00:00:00.001 Started by upstream project "autotest-per-patch" build number 122834 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.090 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.092 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.112 Fetching changes from the remote Git repository 00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.134 Using shallow fetch with depth 1 00:00:00.134 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.134 > git --version # timeout=10 00:00:00.153 > git --version # 'git version 2.39.2' 00:00:00.153 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.154 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.154 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.056 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.068 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.078 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:05.078 > git config core.sparsecheckout # timeout=10 00:00:05.089 > git read-tree -mu HEAD # timeout=10 00:00:05.105 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:05.123 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:05.123 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:05.199 [Pipeline] Start of Pipeline 00:00:05.214 [Pipeline] library 00:00:05.216 Loading library shm_lib@master 00:00:05.216 Library shm_lib@master is cached. Copying from home. 00:00:05.234 [Pipeline] node 00:00:05.251 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.252 [Pipeline] { 00:00:05.262 [Pipeline] catchError 00:00:05.264 [Pipeline] { 00:00:05.274 [Pipeline] wrap 00:00:05.281 [Pipeline] { 00:00:05.286 [Pipeline] stage 00:00:05.288 [Pipeline] { (Prologue) 00:00:05.456 [Pipeline] sh 00:00:05.744 + logger -p user.info -t JENKINS-CI 00:00:05.765 [Pipeline] echo 00:00:05.766 Node: GP11 00:00:05.773 [Pipeline] sh 00:00:06.073 [Pipeline] setCustomBuildProperty 00:00:06.086 [Pipeline] echo 00:00:06.087 Cleanup processes 00:00:06.095 [Pipeline] sh 00:00:06.377 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.377 2088082 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.389 [Pipeline] sh 00:00:06.670 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.670 ++ grep -v 'sudo pgrep' 00:00:06.670 ++ awk '{print $1}' 00:00:06.670 + sudo kill -9 00:00:06.670 + true 00:00:06.684 [Pipeline] cleanWs 00:00:06.696 [WS-CLEANUP] Deleting project workspace... 00:00:06.696 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.703 [WS-CLEANUP] done 00:00:06.709 [Pipeline] setCustomBuildProperty 00:00:06.724 [Pipeline] sh 00:00:07.005 + sudo git config --global --replace-all safe.directory '*' 00:00:07.079 [Pipeline] nodesByLabel 00:00:07.080 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.089 [Pipeline] httpRequest 00:00:07.094 HttpMethod: GET 00:00:07.094 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:07.098 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:07.102 Response Code: HTTP/1.1 200 OK 00:00:07.103 Success: Status code 200 is in the accepted range: 200,404 00:00:07.103 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:07.849 [Pipeline] sh 00:00:08.133 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.149 [Pipeline] httpRequest 00:00:08.153 HttpMethod: GET 00:00:08.154 URL: http://10.211.164.101/packages/spdk_0ed7af4468c66ce6a579b1abd9f61e1eeef29850.tar.gz 00:00:08.155 Sending request to url: http://10.211.164.101/packages/spdk_0ed7af4468c66ce6a579b1abd9f61e1eeef29850.tar.gz 00:00:08.167 Response Code: HTTP/1.1 200 OK 00:00:08.168 Success: Status code 200 is in the accepted range: 200,404 00:00:08.168 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0ed7af4468c66ce6a579b1abd9f61e1eeef29850.tar.gz 00:00:30.028 [Pipeline] sh 00:00:30.308 + tar --no-same-owner -xf spdk_0ed7af4468c66ce6a579b1abd9f61e1eeef29850.tar.gz 00:00:32.850 [Pipeline] sh 00:00:33.132 + git -C spdk log --oneline -n5 00:00:33.132 0ed7af446 test/nvmf: move gen_chap_key() to common.sh 00:00:33.132 8e817b0c0 nvmf: dump qpair's auth state 00:00:33.132 eb267841c nvmf/auth: send DH-HMAC-CHAP_success1 message 00:00:33.132 f0bf11db4 nvmf/auth: execute DH-HMAC-CHAP_reply message 00:00:33.132 2b14ffc34 nvmf: method for getting DH-HMAC-CHAP keys 00:00:33.144 [Pipeline] } 00:00:33.160 [Pipeline] // stage 00:00:33.170 [Pipeline] stage 00:00:33.172 [Pipeline] { (Prepare) 00:00:33.190 [Pipeline] writeFile 00:00:33.208 [Pipeline] sh 00:00:33.488 + logger -p user.info -t JENKINS-CI 00:00:33.503 [Pipeline] sh 00:00:33.782 + logger -p user.info -t JENKINS-CI 00:00:33.794 [Pipeline] sh 00:00:34.073 + cat autorun-spdk.conf 00:00:34.073 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.073 SPDK_TEST_NVMF=1 00:00:34.073 SPDK_TEST_NVME_CLI=1 00:00:34.073 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:34.073 SPDK_TEST_NVMF_NICS=e810 00:00:34.073 SPDK_TEST_VFIOUSER=1 00:00:34.073 SPDK_RUN_UBSAN=1 00:00:34.073 NET_TYPE=phy 00:00:34.080 RUN_NIGHTLY=0 00:00:34.085 [Pipeline] readFile 00:00:34.107 [Pipeline] withEnv 00:00:34.109 [Pipeline] { 00:00:34.123 [Pipeline] sh 00:00:34.402 + set -ex 00:00:34.402 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:34.402 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:34.402 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.402 ++ SPDK_TEST_NVMF=1 00:00:34.402 ++ SPDK_TEST_NVME_CLI=1 00:00:34.402 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:34.402 ++ SPDK_TEST_NVMF_NICS=e810 00:00:34.402 ++ SPDK_TEST_VFIOUSER=1 00:00:34.402 ++ SPDK_RUN_UBSAN=1 00:00:34.402 ++ NET_TYPE=phy 00:00:34.402 ++ RUN_NIGHTLY=0 00:00:34.402 + case $SPDK_TEST_NVMF_NICS in 00:00:34.402 + DRIVERS=ice 00:00:34.402 + [[ tcp == \r\d\m\a ]] 00:00:34.402 + [[ -n ice ]] 00:00:34.402 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:34.403 rmmod: ERROR: Module mlx4_ib is not 
currently loaded 00:00:38.588 rmmod: ERROR: Module irdma is not currently loaded 00:00:38.588 rmmod: ERROR: Module i40iw is not currently loaded 00:00:38.588 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:38.588 + true 00:00:38.588 + for D in $DRIVERS 00:00:38.588 + sudo modprobe ice 00:00:38.588 + exit 0 00:00:38.598 [Pipeline] } 00:00:38.615 [Pipeline] // withEnv 00:00:38.620 [Pipeline] } 00:00:38.637 [Pipeline] // stage 00:00:38.646 [Pipeline] catchError 00:00:38.648 [Pipeline] { 00:00:38.665 [Pipeline] timeout 00:00:38.665 Timeout set to expire in 40 min 00:00:38.667 [Pipeline] { 00:00:38.684 [Pipeline] stage 00:00:38.686 [Pipeline] { (Tests) 00:00:38.701 [Pipeline] sh 00:00:38.980 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.980 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.980 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.980 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:38.980 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:38.980 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:38.980 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:38.980 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:38.980 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:38.980 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:38.980 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:38.980 + source /etc/os-release 00:00:38.980 ++ NAME='Fedora Linux' 00:00:38.981 ++ VERSION='38 (Cloud Edition)' 00:00:38.981 ++ ID=fedora 00:00:38.981 ++ VERSION_ID=38 00:00:38.981 ++ VERSION_CODENAME= 00:00:38.981 ++ PLATFORM_ID=platform:f38 00:00:38.981 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:38.981 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:38.981 ++ LOGO=fedora-logo-icon 00:00:38.981 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:38.981 ++ HOME_URL=https://fedoraproject.org/ 00:00:38.981 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:38.981 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:38.981 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:38.981 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:38.981 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:38.981 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:38.981 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:38.981 ++ SUPPORT_END=2024-05-14 00:00:38.981 ++ VARIANT='Cloud Edition' 00:00:38.981 ++ VARIANT_ID=cloud 00:00:38.981 + uname -a 00:00:38.981 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:38.981 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:40.373 Hugepages 00:00:40.373 node hugesize free / total 00:00:40.373 node0 1048576kB 0 / 0 00:00:40.373 node0 2048kB 0 / 0 00:00:40.373 node1 1048576kB 0 / 0 00:00:40.373 node1 2048kB 0 / 0 00:00:40.373 00:00:40.373 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:40.373 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:40.373 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:40.373 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:40.373 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:40.373 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:40.373 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:40.373 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:40.373 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:40.373 
I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:40.373 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:40.373 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:40.373 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:40.373 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:40.373 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:40.373 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:40.373 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:40.373 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:40.373 + rm -f /tmp/spdk-ld-path 00:00:40.373 + source autorun-spdk.conf 00:00:40.373 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.373 ++ SPDK_TEST_NVMF=1 00:00:40.373 ++ SPDK_TEST_NVME_CLI=1 00:00:40.373 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.373 ++ SPDK_TEST_NVMF_NICS=e810 00:00:40.373 ++ SPDK_TEST_VFIOUSER=1 00:00:40.373 ++ SPDK_RUN_UBSAN=1 00:00:40.373 ++ NET_TYPE=phy 00:00:40.373 ++ RUN_NIGHTLY=0 00:00:40.373 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:40.373 + [[ -n '' ]] 00:00:40.373 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:40.373 + for M in /var/spdk/build-*-manifest.txt 00:00:40.373 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:40.374 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:40.374 + for M in /var/spdk/build-*-manifest.txt 00:00:40.374 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:40.374 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:40.374 ++ uname 00:00:40.374 + [[ Linux == \L\i\n\u\x ]] 00:00:40.374 + sudo dmesg -T 00:00:40.374 + sudo dmesg --clear 00:00:40.374 + dmesg_pid=2088849 00:00:40.374 + [[ Fedora Linux == FreeBSD ]] 00:00:40.374 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:40.374 + sudo dmesg -Tw 00:00:40.374 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:40.374 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:40.374 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:40.374 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:40.374 + [[ -x /usr/src/fio-static/fio ]] 00:00:40.374 + export FIO_BIN=/usr/src/fio-static/fio 00:00:40.374 + FIO_BIN=/usr/src/fio-static/fio 00:00:40.374 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:40.374 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:40.374 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:40.374 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:40.374 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:40.374 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:40.374 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:40.374 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:40.374 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:40.374 Test configuration: 00:00:40.374 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.374 SPDK_TEST_NVMF=1 00:00:40.374 SPDK_TEST_NVME_CLI=1 00:00:40.374 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.374 SPDK_TEST_NVMF_NICS=e810 00:00:40.374 SPDK_TEST_VFIOUSER=1 00:00:40.374 SPDK_RUN_UBSAN=1 00:00:40.374 NET_TYPE=phy 00:00:40.374 RUN_NIGHTLY=0 02:17:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:40.374 02:17:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:40.374 02:17:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:40.374 02:17:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:40.374 02:17:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:40.374 02:17:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:40.374 02:17:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:40.374 02:17:27 -- paths/export.sh@5 -- $ export PATH 00:00:40.374 02:17:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:40.374 02:17:27 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:40.374 02:17:27 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:40.633 02:17:27 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715732247.XXXXXX 00:00:40.633 02:17:27 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715732247.8Bzwqh 00:00:40.633 02:17:27 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:40.633 02:17:27 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:40.633 02:17:27 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:40.633 02:17:27 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:40.633 02:17:27 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:40.633 02:17:27 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:40.633 02:17:27 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:40.633 02:17:27 -- common/autotest_common.sh@10 -- $ set +x 00:00:40.633 02:17:27 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:40.633 02:17:27 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:40.633 02:17:27 -- pm/common@17 -- $ local monitor 00:00:40.633 02:17:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:40.633 02:17:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:40.633 02:17:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:40.633 02:17:27 -- pm/common@21 -- $ date +%s 00:00:40.633 02:17:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:40.633 02:17:27 -- pm/common@21 -- $ date +%s 00:00:40.633 02:17:27 -- pm/common@25 -- $ sleep 1 00:00:40.633 02:17:27 -- pm/common@21 -- $ date +%s 00:00:40.633 02:17:27 -- pm/common@21 -- $ date +%s 00:00:40.633 02:17:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715732247 00:00:40.633 02:17:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715732247 00:00:40.633 02:17:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715732247 00:00:40.633 02:17:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715732247 00:00:40.633 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715732247_collect-vmstat.pm.log 00:00:40.633 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715732247_collect-cpu-load.pm.log 00:00:40.633 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715732247_collect-cpu-temp.pm.log 00:00:40.633 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715732247_collect-bmc-pm.bmc.pm.log 00:00:41.572 02:17:28 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:41.572 02:17:28 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:41.572 02:17:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:41.572 02:17:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:41.572 02:17:28 -- spdk/autobuild.sh@16 -- $ date -u 00:00:41.572 Wed May 15 12:17:28 AM UTC 2024 00:00:41.572 02:17:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:41.572 v24.05-pre-631-g0ed7af446 00:00:41.572 02:17:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:41.572 02:17:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:41.572 02:17:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:41.572 02:17:28 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:41.572 02:17:28 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:41.572 02:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:00:41.572 ************************************ 00:00:41.572 START TEST ubsan 00:00:41.572 ************************************ 00:00:41.572 02:17:28 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:41.572 using ubsan 00:00:41.572 00:00:41.572 real 0m0.000s 00:00:41.572 user 0m0.000s 00:00:41.572 sys 0m0.000s 00:00:41.572 02:17:28 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:41.572 02:17:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:41.572 ************************************ 00:00:41.572 END TEST ubsan 00:00:41.572 ************************************ 00:00:41.572 02:17:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:41.572 02:17:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:41.572 02:17:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:41.572 02:17:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:41.572 02:17:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:41.572 02:17:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:41.572 02:17:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:41.572 02:17:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:41.572 02:17:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:41.572 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:41.572 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:41.830 Using 'verbs' RDMA provider 00:00:52.375 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:02.341 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:02.341 Creating mk/config.mk...done. 00:01:02.341 Creating mk/cc.flags.mk...done. 00:01:02.341 Type 'make' to build. 00:01:02.341 02:17:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:02.341 02:17:49 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:02.341 02:17:49 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:02.341 02:17:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.341 ************************************ 00:01:02.341 START TEST make 00:01:02.341 ************************************ 00:01:02.341 02:17:49 make -- common/autotest_common.sh@1121 -- $ make -j48 00:01:02.341 make[1]: Nothing to be done for 'all'. 
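For reference, the configure and make step traced above can be approximated outside Jenkins with a short script. This is a minimal sketch, assuming an SPDK checkout with submodules in ./spdk and fio sources under /usr/src/fio (the paths this CI host uses); it is not the full autobuild.sh logic, only the same feature flags this run passes to configure.

```bash
#!/usr/bin/env bash
# Rough local reproduction of the configure/build step from this log.
# Assumptions: SPDK checkout in ./spdk (submodules initialized), fio sources
# at /usr/src/fio as on the CI host.
set -euo pipefail

cd spdk

# Same feature flags autobuild passes to configure in this run.
./configure \
    --enable-debug --enable-werror \
    --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-vfio-user --with-shared

# The CI job builds with -j48; scale to the local core count instead.
make -j"$(nproc)"
```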
00:01:03.733 The Meson build system 00:01:03.733 Version: 1.3.1 00:01:03.733 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:03.733 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:03.733 Build type: native build 00:01:03.733 Project name: libvfio-user 00:01:03.733 Project version: 0.0.1 00:01:03.733 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:03.733 C linker for the host machine: cc ld.bfd 2.39-16 00:01:03.733 Host machine cpu family: x86_64 00:01:03.733 Host machine cpu: x86_64 00:01:03.733 Run-time dependency threads found: YES 00:01:03.733 Library dl found: YES 00:01:03.733 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:03.733 Run-time dependency json-c found: YES 0.17 00:01:03.733 Run-time dependency cmocka found: YES 1.1.7 00:01:03.733 Program pytest-3 found: NO 00:01:03.733 Program flake8 found: NO 00:01:03.733 Program misspell-fixer found: NO 00:01:03.733 Program restructuredtext-lint found: NO 00:01:03.733 Program valgrind found: YES (/usr/bin/valgrind) 00:01:03.733 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:03.733 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:03.733 Compiler for C supports arguments -Wwrite-strings: YES 00:01:03.733 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:03.733 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:03.733 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:03.733 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:03.733 Build targets in project: 8 00:01:03.733 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:03.733 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:03.733 00:01:03.733 libvfio-user 0.0.1 00:01:03.733 00:01:03.733 User defined options 00:01:03.733 buildtype : debug 00:01:03.733 default_library: shared 00:01:03.733 libdir : /usr/local/lib 00:01:03.733 00:01:03.733 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:04.673 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:04.673 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:04.673 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:04.673 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:04.673 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:04.673 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:04.673 [6/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:04.937 [7/37] Compiling C object samples/null.p/null.c.o 00:01:04.937 [8/37] Compiling C object samples/server.p/server.c.o 00:01:04.937 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:04.937 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:04.937 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:04.937 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:04.937 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:04.937 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:04.937 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:04.937 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:04.937 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:04.937 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:04.937 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:04.937 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:04.937 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:04.937 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:04.937 [23/37] Compiling C object samples/client.p/client.c.o 00:01:04.937 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:04.937 [25/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:04.937 [26/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:04.937 [27/37] Linking target samples/client 00:01:04.937 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:05.222 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:05.222 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:05.222 [31/37] Linking target test/unit_tests 00:01:05.222 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:05.222 [33/37] Linking target samples/server 00:01:05.222 [34/37] Linking target samples/lspci 00:01:05.222 [35/37] Linking target samples/gpio-pci-idio-16 00:01:05.222 [36/37] Linking target samples/null 00:01:05.482 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:05.482 INFO: autodetecting backend as ninja 00:01:05.482 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
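The libvfio-user configure output above corresponds to a plain meson/ninja flow. The sketch below shows roughly equivalent standalone commands, assuming SPDK_DIR points at the checkout used in this run; the user-defined options mirror the "User defined options" block above, but the exact invocation autotest uses is not shown in the log, so treat this only as an approximation for manual reproduction.

```bash
# Hypothetical standalone equivalent of the libvfio-user build this log performs.
# SPDK_DIR is an assumption matching the workspace path seen in this run.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Configure: build dir and options match the Meson summary above.
meson setup "$SPDK_DIR/build/libvfio-user/build-debug" "$SPDK_DIR/libvfio-user" \
    --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib

# Compile.
ninja -C "$SPDK_DIR/build/libvfio-user/build-debug"

# Stage the result under spdk/build/libvfio-user, matching the DESTDIR install
# step that follows in the log.
DESTDIR="$SPDK_DIR/build/libvfio-user" \
    meson install --quiet -C "$SPDK_DIR/build/libvfio-user/build-debug"
```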
00:01:05.482 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:06.056 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:06.056 ninja: no work to do. 00:01:11.403 The Meson build system 00:01:11.403 Version: 1.3.1 00:01:11.403 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:11.403 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:11.403 Build type: native build 00:01:11.403 Program cat found: YES (/usr/bin/cat) 00:01:11.403 Project name: DPDK 00:01:11.403 Project version: 23.11.0 00:01:11.403 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:11.403 C linker for the host machine: cc ld.bfd 2.39-16 00:01:11.403 Host machine cpu family: x86_64 00:01:11.403 Host machine cpu: x86_64 00:01:11.403 Message: ## Building in Developer Mode ## 00:01:11.403 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:11.403 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:11.403 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:11.403 Program python3 found: YES (/usr/bin/python3) 00:01:11.403 Program cat found: YES (/usr/bin/cat) 00:01:11.403 Compiler for C supports arguments -march=native: YES 00:01:11.403 Checking for size of "void *" : 8 00:01:11.403 Checking for size of "void *" : 8 (cached) 00:01:11.403 Library m found: YES 00:01:11.403 Library numa found: YES 00:01:11.403 Has header "numaif.h" : YES 00:01:11.403 Library fdt found: NO 00:01:11.403 Library execinfo found: NO 00:01:11.403 Has header "execinfo.h" : YES 00:01:11.403 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:11.403 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:11.403 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:11.403 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:11.403 Run-time dependency openssl found: YES 3.0.9 00:01:11.403 Run-time dependency libpcap found: YES 1.10.4 00:01:11.403 Has header "pcap.h" with dependency libpcap: YES 00:01:11.403 Compiler for C supports arguments -Wcast-qual: YES 00:01:11.403 Compiler for C supports arguments -Wdeprecated: YES 00:01:11.403 Compiler for C supports arguments -Wformat: YES 00:01:11.403 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:11.403 Compiler for C supports arguments -Wformat-security: NO 00:01:11.403 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:11.403 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:11.403 Compiler for C supports arguments -Wnested-externs: YES 00:01:11.403 Compiler for C supports arguments -Wold-style-definition: YES 00:01:11.403 Compiler for C supports arguments -Wpointer-arith: YES 00:01:11.403 Compiler for C supports arguments -Wsign-compare: YES 00:01:11.403 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:11.403 Compiler for C supports arguments -Wundef: YES 00:01:11.403 Compiler for C supports arguments -Wwrite-strings: YES 00:01:11.403 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:11.403 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:11.403 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:11.403 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:11.403 Program objdump found: YES (/usr/bin/objdump) 00:01:11.403 Compiler for C supports arguments -mavx512f: YES 00:01:11.403 Checking if "AVX512 checking" compiles: YES 00:01:11.403 Fetching value of define "__SSE4_2__" : 1 00:01:11.403 Fetching value of define "__AES__" : 1 00:01:11.403 Fetching value of define "__AVX__" : 1 00:01:11.403 Fetching value of define "__AVX2__" : (undefined) 00:01:11.403 Fetching value of define "__AVX512BW__" : (undefined) 00:01:11.403 Fetching value of define "__AVX512CD__" : (undefined) 00:01:11.403 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:11.403 Fetching value of define "__AVX512F__" : (undefined) 00:01:11.403 Fetching value of define "__AVX512VL__" : (undefined) 00:01:11.403 Fetching value of define "__PCLMUL__" : 1 00:01:11.403 Fetching value of define "__RDRND__" : 1 00:01:11.403 Fetching value of define "__RDSEED__" : (undefined) 00:01:11.403 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:11.403 Fetching value of define "__znver1__" : (undefined) 00:01:11.403 Fetching value of define "__znver2__" : (undefined) 00:01:11.403 Fetching value of define "__znver3__" : (undefined) 00:01:11.403 Fetching value of define "__znver4__" : (undefined) 00:01:11.403 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:11.403 Message: lib/log: Defining dependency "log" 00:01:11.403 Message: lib/kvargs: Defining dependency "kvargs" 00:01:11.403 Message: lib/telemetry: Defining dependency "telemetry" 00:01:11.403 Checking for function "getentropy" : NO 00:01:11.403 Message: lib/eal: Defining dependency "eal" 00:01:11.403 Message: lib/ring: Defining dependency "ring" 00:01:11.403 Message: lib/rcu: Defining dependency "rcu" 00:01:11.403 Message: lib/mempool: Defining dependency "mempool" 00:01:11.403 Message: lib/mbuf: Defining dependency "mbuf" 00:01:11.403 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:11.403 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:11.403 Compiler for C supports arguments -mpclmul: YES 00:01:11.403 Compiler for C supports arguments -maes: YES 00:01:11.403 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:11.403 Compiler for C supports arguments -mavx512bw: YES 00:01:11.403 Compiler for C supports arguments -mavx512dq: YES 00:01:11.403 Compiler for C supports arguments -mavx512vl: YES 00:01:11.403 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:11.403 Compiler for C supports arguments -mavx2: YES 00:01:11.403 Compiler for C supports arguments -mavx: YES 00:01:11.403 Message: lib/net: Defining dependency "net" 00:01:11.403 Message: lib/meter: Defining dependency "meter" 00:01:11.403 Message: lib/ethdev: Defining dependency "ethdev" 00:01:11.403 Message: lib/pci: Defining dependency "pci" 00:01:11.403 Message: lib/cmdline: Defining dependency "cmdline" 00:01:11.403 Message: lib/hash: Defining dependency "hash" 00:01:11.403 Message: lib/timer: Defining dependency "timer" 00:01:11.403 Message: lib/compressdev: Defining dependency "compressdev" 00:01:11.403 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:11.403 Message: lib/dmadev: Defining dependency "dmadev" 00:01:11.403 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:11.403 Message: lib/power: Defining dependency "power" 00:01:11.403 Message: lib/reorder: Defining dependency "reorder" 00:01:11.403 Message: lib/security: Defining dependency "security" 
00:01:11.403 Has header "linux/userfaultfd.h" : YES 00:01:11.403 Has header "linux/vduse.h" : YES 00:01:11.403 Message: lib/vhost: Defining dependency "vhost" 00:01:11.403 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:11.403 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:11.403 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:11.403 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:11.403 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:11.403 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:11.403 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:11.403 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:11.403 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:11.403 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:11.403 Program doxygen found: YES (/usr/bin/doxygen) 00:01:11.403 Configuring doxy-api-html.conf using configuration 00:01:11.403 Configuring doxy-api-man.conf using configuration 00:01:11.403 Program mandb found: YES (/usr/bin/mandb) 00:01:11.403 Program sphinx-build found: NO 00:01:11.403 Configuring rte_build_config.h using configuration 00:01:11.403 Message: 00:01:11.403 ================= 00:01:11.403 Applications Enabled 00:01:11.403 ================= 00:01:11.403 00:01:11.403 apps: 00:01:11.403 00:01:11.403 00:01:11.403 Message: 00:01:11.403 ================= 00:01:11.403 Libraries Enabled 00:01:11.403 ================= 00:01:11.403 00:01:11.403 libs: 00:01:11.403 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:11.403 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:11.403 cryptodev, dmadev, power, reorder, security, vhost, 00:01:11.403 00:01:11.403 Message: 00:01:11.403 =============== 00:01:11.403 Drivers Enabled 00:01:11.403 =============== 00:01:11.403 00:01:11.403 common: 00:01:11.403 00:01:11.403 bus: 00:01:11.403 pci, vdev, 00:01:11.403 mempool: 00:01:11.403 ring, 00:01:11.403 dma: 00:01:11.403 00:01:11.403 net: 00:01:11.403 00:01:11.403 crypto: 00:01:11.403 00:01:11.403 compress: 00:01:11.403 00:01:11.403 vdpa: 00:01:11.403 00:01:11.403 00:01:11.403 Message: 00:01:11.403 ================= 00:01:11.403 Content Skipped 00:01:11.403 ================= 00:01:11.403 00:01:11.403 apps: 00:01:11.403 dumpcap: explicitly disabled via build config 00:01:11.403 graph: explicitly disabled via build config 00:01:11.403 pdump: explicitly disabled via build config 00:01:11.403 proc-info: explicitly disabled via build config 00:01:11.403 test-acl: explicitly disabled via build config 00:01:11.403 test-bbdev: explicitly disabled via build config 00:01:11.403 test-cmdline: explicitly disabled via build config 00:01:11.403 test-compress-perf: explicitly disabled via build config 00:01:11.403 test-crypto-perf: explicitly disabled via build config 00:01:11.403 test-dma-perf: explicitly disabled via build config 00:01:11.403 test-eventdev: explicitly disabled via build config 00:01:11.404 test-fib: explicitly disabled via build config 00:01:11.404 test-flow-perf: explicitly disabled via build config 00:01:11.404 test-gpudev: explicitly disabled via build config 00:01:11.404 test-mldev: explicitly disabled via build config 00:01:11.404 test-pipeline: explicitly disabled via build config 00:01:11.404 test-pmd: explicitly disabled via build config 00:01:11.404 test-regex: explicitly disabled via build config 
00:01:11.404 test-sad: explicitly disabled via build config 00:01:11.404 test-security-perf: explicitly disabled via build config 00:01:11.404 00:01:11.404 libs: 00:01:11.404 metrics: explicitly disabled via build config 00:01:11.404 acl: explicitly disabled via build config 00:01:11.404 bbdev: explicitly disabled via build config 00:01:11.404 bitratestats: explicitly disabled via build config 00:01:11.404 bpf: explicitly disabled via build config 00:01:11.404 cfgfile: explicitly disabled via build config 00:01:11.404 distributor: explicitly disabled via build config 00:01:11.404 efd: explicitly disabled via build config 00:01:11.404 eventdev: explicitly disabled via build config 00:01:11.404 dispatcher: explicitly disabled via build config 00:01:11.404 gpudev: explicitly disabled via build config 00:01:11.404 gro: explicitly disabled via build config 00:01:11.404 gso: explicitly disabled via build config 00:01:11.404 ip_frag: explicitly disabled via build config 00:01:11.404 jobstats: explicitly disabled via build config 00:01:11.404 latencystats: explicitly disabled via build config 00:01:11.404 lpm: explicitly disabled via build config 00:01:11.404 member: explicitly disabled via build config 00:01:11.404 pcapng: explicitly disabled via build config 00:01:11.404 rawdev: explicitly disabled via build config 00:01:11.404 regexdev: explicitly disabled via build config 00:01:11.404 mldev: explicitly disabled via build config 00:01:11.404 rib: explicitly disabled via build config 00:01:11.404 sched: explicitly disabled via build config 00:01:11.404 stack: explicitly disabled via build config 00:01:11.404 ipsec: explicitly disabled via build config 00:01:11.404 pdcp: explicitly disabled via build config 00:01:11.404 fib: explicitly disabled via build config 00:01:11.404 port: explicitly disabled via build config 00:01:11.404 pdump: explicitly disabled via build config 00:01:11.404 table: explicitly disabled via build config 00:01:11.404 pipeline: explicitly disabled via build config 00:01:11.404 graph: explicitly disabled via build config 00:01:11.404 node: explicitly disabled via build config 00:01:11.404 00:01:11.404 drivers: 00:01:11.404 common/cpt: not in enabled drivers build config 00:01:11.404 common/dpaax: not in enabled drivers build config 00:01:11.404 common/iavf: not in enabled drivers build config 00:01:11.404 common/idpf: not in enabled drivers build config 00:01:11.404 common/mvep: not in enabled drivers build config 00:01:11.404 common/octeontx: not in enabled drivers build config 00:01:11.404 bus/auxiliary: not in enabled drivers build config 00:01:11.404 bus/cdx: not in enabled drivers build config 00:01:11.404 bus/dpaa: not in enabled drivers build config 00:01:11.404 bus/fslmc: not in enabled drivers build config 00:01:11.404 bus/ifpga: not in enabled drivers build config 00:01:11.404 bus/platform: not in enabled drivers build config 00:01:11.404 bus/vmbus: not in enabled drivers build config 00:01:11.404 common/cnxk: not in enabled drivers build config 00:01:11.404 common/mlx5: not in enabled drivers build config 00:01:11.404 common/nfp: not in enabled drivers build config 00:01:11.404 common/qat: not in enabled drivers build config 00:01:11.404 common/sfc_efx: not in enabled drivers build config 00:01:11.404 mempool/bucket: not in enabled drivers build config 00:01:11.404 mempool/cnxk: not in enabled drivers build config 00:01:11.404 mempool/dpaa: not in enabled drivers build config 00:01:11.404 mempool/dpaa2: not in enabled drivers build config 00:01:11.404 
mempool/octeontx: not in enabled drivers build config 00:01:11.404 mempool/stack: not in enabled drivers build config 00:01:11.404 dma/cnxk: not in enabled drivers build config 00:01:11.404 dma/dpaa: not in enabled drivers build config 00:01:11.404 dma/dpaa2: not in enabled drivers build config 00:01:11.404 dma/hisilicon: not in enabled drivers build config 00:01:11.404 dma/idxd: not in enabled drivers build config 00:01:11.404 dma/ioat: not in enabled drivers build config 00:01:11.404 dma/skeleton: not in enabled drivers build config 00:01:11.404 net/af_packet: not in enabled drivers build config 00:01:11.404 net/af_xdp: not in enabled drivers build config 00:01:11.404 net/ark: not in enabled drivers build config 00:01:11.404 net/atlantic: not in enabled drivers build config 00:01:11.404 net/avp: not in enabled drivers build config 00:01:11.404 net/axgbe: not in enabled drivers build config 00:01:11.404 net/bnx2x: not in enabled drivers build config 00:01:11.404 net/bnxt: not in enabled drivers build config 00:01:11.404 net/bonding: not in enabled drivers build config 00:01:11.404 net/cnxk: not in enabled drivers build config 00:01:11.404 net/cpfl: not in enabled drivers build config 00:01:11.404 net/cxgbe: not in enabled drivers build config 00:01:11.404 net/dpaa: not in enabled drivers build config 00:01:11.404 net/dpaa2: not in enabled drivers build config 00:01:11.404 net/e1000: not in enabled drivers build config 00:01:11.404 net/ena: not in enabled drivers build config 00:01:11.404 net/enetc: not in enabled drivers build config 00:01:11.404 net/enetfec: not in enabled drivers build config 00:01:11.404 net/enic: not in enabled drivers build config 00:01:11.404 net/failsafe: not in enabled drivers build config 00:01:11.404 net/fm10k: not in enabled drivers build config 00:01:11.404 net/gve: not in enabled drivers build config 00:01:11.404 net/hinic: not in enabled drivers build config 00:01:11.404 net/hns3: not in enabled drivers build config 00:01:11.404 net/i40e: not in enabled drivers build config 00:01:11.404 net/iavf: not in enabled drivers build config 00:01:11.404 net/ice: not in enabled drivers build config 00:01:11.404 net/idpf: not in enabled drivers build config 00:01:11.404 net/igc: not in enabled drivers build config 00:01:11.404 net/ionic: not in enabled drivers build config 00:01:11.404 net/ipn3ke: not in enabled drivers build config 00:01:11.404 net/ixgbe: not in enabled drivers build config 00:01:11.404 net/mana: not in enabled drivers build config 00:01:11.404 net/memif: not in enabled drivers build config 00:01:11.404 net/mlx4: not in enabled drivers build config 00:01:11.404 net/mlx5: not in enabled drivers build config 00:01:11.404 net/mvneta: not in enabled drivers build config 00:01:11.404 net/mvpp2: not in enabled drivers build config 00:01:11.404 net/netvsc: not in enabled drivers build config 00:01:11.404 net/nfb: not in enabled drivers build config 00:01:11.404 net/nfp: not in enabled drivers build config 00:01:11.404 net/ngbe: not in enabled drivers build config 00:01:11.404 net/null: not in enabled drivers build config 00:01:11.404 net/octeontx: not in enabled drivers build config 00:01:11.404 net/octeon_ep: not in enabled drivers build config 00:01:11.404 net/pcap: not in enabled drivers build config 00:01:11.404 net/pfe: not in enabled drivers build config 00:01:11.404 net/qede: not in enabled drivers build config 00:01:11.404 net/ring: not in enabled drivers build config 00:01:11.404 net/sfc: not in enabled drivers build config 00:01:11.404 net/softnic: 
not in enabled drivers build config 00:01:11.404 net/tap: not in enabled drivers build config 00:01:11.404 net/thunderx: not in enabled drivers build config 00:01:11.404 net/txgbe: not in enabled drivers build config 00:01:11.404 net/vdev_netvsc: not in enabled drivers build config 00:01:11.404 net/vhost: not in enabled drivers build config 00:01:11.404 net/virtio: not in enabled drivers build config 00:01:11.404 net/vmxnet3: not in enabled drivers build config 00:01:11.404 raw/*: missing internal dependency, "rawdev" 00:01:11.404 crypto/armv8: not in enabled drivers build config 00:01:11.404 crypto/bcmfs: not in enabled drivers build config 00:01:11.404 crypto/caam_jr: not in enabled drivers build config 00:01:11.404 crypto/ccp: not in enabled drivers build config 00:01:11.404 crypto/cnxk: not in enabled drivers build config 00:01:11.404 crypto/dpaa_sec: not in enabled drivers build config 00:01:11.404 crypto/dpaa2_sec: not in enabled drivers build config 00:01:11.404 crypto/ipsec_mb: not in enabled drivers build config 00:01:11.404 crypto/mlx5: not in enabled drivers build config 00:01:11.404 crypto/mvsam: not in enabled drivers build config 00:01:11.404 crypto/nitrox: not in enabled drivers build config 00:01:11.404 crypto/null: not in enabled drivers build config 00:01:11.404 crypto/octeontx: not in enabled drivers build config 00:01:11.404 crypto/openssl: not in enabled drivers build config 00:01:11.404 crypto/scheduler: not in enabled drivers build config 00:01:11.404 crypto/uadk: not in enabled drivers build config 00:01:11.404 crypto/virtio: not in enabled drivers build config 00:01:11.404 compress/isal: not in enabled drivers build config 00:01:11.404 compress/mlx5: not in enabled drivers build config 00:01:11.404 compress/octeontx: not in enabled drivers build config 00:01:11.404 compress/zlib: not in enabled drivers build config 00:01:11.404 regex/*: missing internal dependency, "regexdev" 00:01:11.404 ml/*: missing internal dependency, "mldev" 00:01:11.404 vdpa/ifc: not in enabled drivers build config 00:01:11.404 vdpa/mlx5: not in enabled drivers build config 00:01:11.404 vdpa/nfp: not in enabled drivers build config 00:01:11.404 vdpa/sfc: not in enabled drivers build config 00:01:11.404 event/*: missing internal dependency, "eventdev" 00:01:11.404 baseband/*: missing internal dependency, "bbdev" 00:01:11.404 gpu/*: missing internal dependency, "gpudev" 00:01:11.404 00:01:11.404 00:01:11.404 Build targets in project: 85 00:01:11.404 00:01:11.404 DPDK 23.11.0 00:01:11.404 00:01:11.404 User defined options 00:01:11.404 buildtype : debug 00:01:11.404 default_library : shared 00:01:11.404 libdir : lib 00:01:11.404 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:11.404 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:11.404 c_link_args : 00:01:11.404 cpu_instruction_set: native 00:01:11.404 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:11.404 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:11.404 enable_docs : false 00:01:11.404 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 
00:01:11.404 enable_kmods : false 00:01:11.404 tests : false 00:01:11.404 00:01:11.404 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:11.404 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:11.665 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:11.665 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:11.665 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:11.665 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:11.665 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:11.665 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:11.665 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:11.665 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:11.665 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:11.665 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:11.665 [11/265] Linking static target lib/librte_kvargs.a 00:01:11.665 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:11.665 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:11.665 [14/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:11.665 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:11.665 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:11.665 [17/265] Linking static target lib/librte_log.a 00:01:11.665 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:11.665 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:11.927 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:11.927 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:12.188 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.454 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:12.454 [24/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:12.454 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:12.454 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:12.454 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:12.454 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:12.454 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:12.454 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:12.454 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:12.454 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:12.454 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:12.454 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:12.454 [35/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:12.454 [36/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:12.454 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:12.454 
[38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:12.454 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:12.454 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:12.454 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:12.454 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:12.454 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:12.454 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:12.454 [45/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:12.454 [46/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:12.454 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:12.454 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:12.454 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:12.454 [50/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:12.454 [51/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:12.454 [52/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:12.454 [53/265] Linking static target lib/librte_telemetry.a 00:01:12.454 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:12.454 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:12.717 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:12.717 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:12.717 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:12.717 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:12.717 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:12.717 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:12.717 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:12.717 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:12.717 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:12.717 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:12.717 [66/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:12.717 [67/265] Linking static target lib/librte_pci.a 00:01:12.717 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:12.717 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:12.717 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:12.717 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:12.977 [72/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.977 [73/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:12.977 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:12.977 [75/265] Linking target lib/librte_log.so.24.0 00:01:12.977 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:12.977 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:12.977 [78/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:12.977 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:12.977 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:12.977 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:13.240 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:13.240 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:13.240 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:13.240 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:13.240 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:13.240 [87/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.240 [88/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:13.240 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:13.240 [90/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:13.240 [91/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:13.501 [92/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:13.501 [93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:13.501 [94/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:13.501 [95/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:13.501 [96/265] Linking target lib/librte_kvargs.so.24.0 00:01:13.501 [97/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:13.501 [98/265] Linking static target lib/librte_eal.a 00:01:13.501 [99/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:13.501 [100/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:13.501 [101/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:13.501 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:13.501 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:13.501 [104/265] Linking static target lib/librte_ring.a 00:01:13.501 [105/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:13.501 [106/265] Linking static target lib/librte_meter.a 00:01:13.501 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:13.501 [108/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.501 [109/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:13.501 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:13.760 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:13.760 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:13.760 [113/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:13.760 [114/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:13.760 [115/265] Linking target lib/librte_telemetry.so.24.0 00:01:13.760 [116/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:13.760 [117/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:13.760 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:13.760 [119/265] Linking 
static target lib/librte_mempool.a 00:01:13.760 [120/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:13.760 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:13.760 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:13.760 [123/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:13.760 [124/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:13.760 [125/265] Linking static target lib/librte_rcu.a 00:01:13.760 [126/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:13.760 [127/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:13.760 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:13.760 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:13.760 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:14.022 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:14.022 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:14.022 [133/265] Linking static target lib/librte_cmdline.a 00:01:14.022 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:14.022 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:14.022 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:14.022 [137/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:14.022 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:14.022 [139/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:14.022 [140/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.022 [141/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:14.289 [142/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:14.289 [143/265] Linking static target lib/librte_net.a 00:01:14.289 [144/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:14.289 [145/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:14.289 [146/265] Linking static target lib/librte_timer.a 00:01:14.289 [147/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.289 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:14.289 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:14.548 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.548 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:14.548 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:14.548 [153/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:14.548 [154/265] Linking static target lib/librte_dmadev.a 00:01:14.548 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:14.548 [156/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:14.548 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:14.548 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:14.548 [159/265] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:01:14.548 [160/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.806 [161/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:14.806 [162/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.806 [163/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:14.806 [164/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:14.806 [165/265] Linking static target lib/librte_hash.a 00:01:14.806 [166/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:14.806 [167/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.806 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:14.806 [169/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:14.807 [170/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:14.807 [171/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:14.807 [172/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:14.807 [173/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:14.807 [174/265] Linking static target lib/librte_compressdev.a 00:01:14.807 [175/265] Linking static target lib/librte_power.a 00:01:14.807 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.807 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:15.065 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:15.065 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:15.065 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:15.065 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:15.065 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:15.065 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:15.065 [184/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:15.065 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:15.065 [186/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.065 [187/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:15.065 [188/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:15.065 [189/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:15.065 [190/265] Linking static target lib/librte_reorder.a 00:01:15.324 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:15.324 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:15.324 [193/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:15.324 [194/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:15.324 [195/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:15.324 [196/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:15.324 [197/265] Linking static target lib/librte_mbuf.a 00:01:15.324 [198/265] Linking static target drivers/librte_bus_pci.a 00:01:15.324 [199/265] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.324 [200/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.324 [201/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:15.324 [202/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:15.324 [203/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:15.324 [204/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:15.324 [205/265] Linking static target drivers/librte_bus_vdev.a 00:01:15.324 [206/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:15.324 [207/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:15.324 [208/265] Linking static target lib/librte_security.a 00:01:15.324 [209/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.324 [210/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.582 [211/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:15.582 [212/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:15.582 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:15.582 [214/265] Linking static target drivers/librte_mempool_ring.a 00:01:15.582 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.582 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:15.582 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:15.582 [218/265] Linking static target lib/librte_ethdev.a 00:01:15.582 [219/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:15.582 [220/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.582 [221/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.582 [222/265] Linking static target lib/librte_cryptodev.a 00:01:15.582 [223/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.956 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.889 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:19.788 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.788 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.788 [228/265] Linking target lib/librte_eal.so.24.0 00:01:20.046 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:20.046 [230/265] Linking target lib/librte_ring.so.24.0 00:01:20.046 [231/265] Linking target lib/librte_pci.so.24.0 00:01:20.046 [232/265] Linking target lib/librte_timer.so.24.0 00:01:20.046 [233/265] Linking target lib/librte_meter.so.24.0 00:01:20.046 [234/265] Linking target lib/librte_dmadev.so.24.0 00:01:20.046 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:20.046 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:20.046 [237/265] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:20.046 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:20.046 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:20.046 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:20.304 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:20.304 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:20.304 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:20.304 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:20.304 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:20.304 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:20.304 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:20.562 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:20.562 [249/265] Linking target lib/librte_compressdev.so.24.0 00:01:20.562 [250/265] Linking target lib/librte_reorder.so.24.0 00:01:20.562 [251/265] Linking target lib/librte_net.so.24.0 00:01:20.562 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:20.562 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:20.562 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:20.820 [255/265] Linking target lib/librte_hash.so.24.0 00:01:20.820 [256/265] Linking target lib/librte_security.so.24.0 00:01:20.820 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:20.820 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:20.820 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:20.820 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:20.820 [261/265] Linking target lib/librte_power.so.24.0 00:01:23.347 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:23.347 [263/265] Linking static target lib/librte_vhost.a 00:01:24.720 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.720 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:24.720 INFO: autodetecting backend as ninja 00:01:24.720 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:25.655 CC lib/ut_mock/mock.o 00:01:25.655 CC lib/log/log.o 00:01:25.655 CC lib/ut/ut.o 00:01:25.655 CC lib/log/log_flags.o 00:01:25.655 CC lib/log/log_deprecated.o 00:01:25.655 LIB libspdk_ut_mock.a 00:01:25.655 SO libspdk_ut_mock.so.6.0 00:01:25.655 LIB libspdk_log.a 00:01:25.655 LIB libspdk_ut.a 00:01:25.655 SO libspdk_ut.so.2.0 00:01:25.655 SO libspdk_log.so.7.0 00:01:25.655 SYMLINK libspdk_ut_mock.so 00:01:25.655 SYMLINK libspdk_ut.so 00:01:25.655 SYMLINK libspdk_log.so 00:01:25.913 CC lib/util/base64.o 00:01:25.913 CC lib/util/bit_array.o 00:01:25.913 CC lib/util/cpuset.o 00:01:25.913 CC lib/dma/dma.o 00:01:25.913 CC lib/util/crc16.o 00:01:25.913 CC lib/util/crc32.o 00:01:25.913 CXX lib/trace_parser/trace.o 00:01:25.913 CC lib/ioat/ioat.o 00:01:25.913 CC lib/util/crc32c.o 00:01:25.913 CC lib/util/crc32_ieee.o 00:01:25.913 CC lib/util/crc64.o 00:01:25.913 CC lib/util/dif.o 00:01:25.913 CC lib/util/fd.o 00:01:25.913 CC lib/util/file.o 00:01:25.913 CC lib/util/hexlify.o 00:01:25.913 CC lib/util/iov.o 00:01:25.913 CC 
lib/util/math.o 00:01:25.913 CC lib/util/pipe.o 00:01:25.913 CC lib/util/strerror_tls.o 00:01:25.913 CC lib/util/string.o 00:01:25.913 CC lib/util/uuid.o 00:01:25.913 CC lib/util/fd_group.o 00:01:25.913 CC lib/util/xor.o 00:01:25.913 CC lib/util/zipf.o 00:01:26.170 CC lib/vfio_user/host/vfio_user_pci.o 00:01:26.170 CC lib/vfio_user/host/vfio_user.o 00:01:26.170 LIB libspdk_dma.a 00:01:26.170 SO libspdk_dma.so.4.0 00:01:26.170 LIB libspdk_ioat.a 00:01:26.170 SYMLINK libspdk_dma.so 00:01:26.170 SO libspdk_ioat.so.7.0 00:01:26.429 SYMLINK libspdk_ioat.so 00:01:26.429 LIB libspdk_vfio_user.a 00:01:26.429 SO libspdk_vfio_user.so.5.0 00:01:26.429 SYMLINK libspdk_vfio_user.so 00:01:26.429 LIB libspdk_util.a 00:01:26.429 SO libspdk_util.so.9.0 00:01:26.725 SYMLINK libspdk_util.so 00:01:27.014 CC lib/rdma/common.o 00:01:27.015 CC lib/idxd/idxd.o 00:01:27.015 CC lib/vmd/vmd.o 00:01:27.015 CC lib/rdma/rdma_verbs.o 00:01:27.015 CC lib/json/json_parse.o 00:01:27.015 CC lib/idxd/idxd_user.o 00:01:27.015 CC lib/vmd/led.o 00:01:27.015 CC lib/json/json_util.o 00:01:27.015 CC lib/json/json_write.o 00:01:27.015 CC lib/conf/conf.o 00:01:27.015 CC lib/env_dpdk/env.o 00:01:27.015 CC lib/env_dpdk/memory.o 00:01:27.015 CC lib/env_dpdk/pci.o 00:01:27.015 CC lib/env_dpdk/init.o 00:01:27.015 CC lib/env_dpdk/threads.o 00:01:27.015 CC lib/env_dpdk/pci_ioat.o 00:01:27.015 CC lib/env_dpdk/pci_virtio.o 00:01:27.015 CC lib/env_dpdk/pci_vmd.o 00:01:27.015 CC lib/env_dpdk/pci_idxd.o 00:01:27.015 CC lib/env_dpdk/pci_event.o 00:01:27.015 CC lib/env_dpdk/sigbus_handler.o 00:01:27.015 CC lib/env_dpdk/pci_dpdk.o 00:01:27.015 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:27.015 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:27.015 LIB libspdk_trace_parser.a 00:01:27.015 SO libspdk_trace_parser.so.5.0 00:01:27.015 SYMLINK libspdk_trace_parser.so 00:01:27.015 LIB libspdk_conf.a 00:01:27.015 SO libspdk_conf.so.6.0 00:01:27.273 SYMLINK libspdk_conf.so 00:01:27.273 LIB libspdk_rdma.a 00:01:27.273 LIB libspdk_json.a 00:01:27.273 SO libspdk_rdma.so.6.0 00:01:27.273 SO libspdk_json.so.6.0 00:01:27.273 SYMLINK libspdk_rdma.so 00:01:27.273 SYMLINK libspdk_json.so 00:01:27.532 LIB libspdk_idxd.a 00:01:27.532 SO libspdk_idxd.so.12.0 00:01:27.532 CC lib/jsonrpc/jsonrpc_server.o 00:01:27.532 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:27.532 CC lib/jsonrpc/jsonrpc_client.o 00:01:27.532 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:27.532 SYMLINK libspdk_idxd.so 00:01:27.532 LIB libspdk_vmd.a 00:01:27.532 SO libspdk_vmd.so.6.0 00:01:27.532 SYMLINK libspdk_vmd.so 00:01:27.790 LIB libspdk_jsonrpc.a 00:01:27.790 SO libspdk_jsonrpc.so.6.0 00:01:27.790 SYMLINK libspdk_jsonrpc.so 00:01:28.048 CC lib/rpc/rpc.o 00:01:28.306 LIB libspdk_rpc.a 00:01:28.306 SO libspdk_rpc.so.6.0 00:01:28.306 SYMLINK libspdk_rpc.so 00:01:28.564 CC lib/keyring/keyring.o 00:01:28.564 CC lib/notify/notify.o 00:01:28.564 CC lib/keyring/keyring_rpc.o 00:01:28.564 CC lib/notify/notify_rpc.o 00:01:28.564 CC lib/trace/trace.o 00:01:28.564 CC lib/trace/trace_flags.o 00:01:28.564 CC lib/trace/trace_rpc.o 00:01:28.564 LIB libspdk_notify.a 00:01:28.564 SO libspdk_notify.so.6.0 00:01:28.564 LIB libspdk_keyring.a 00:01:28.564 SYMLINK libspdk_notify.so 00:01:28.822 LIB libspdk_trace.a 00:01:28.822 SO libspdk_keyring.so.1.0 00:01:28.822 SO libspdk_trace.so.10.0 00:01:28.822 SYMLINK libspdk_keyring.so 00:01:28.822 SYMLINK libspdk_trace.so 00:01:28.822 LIB libspdk_env_dpdk.a 00:01:29.080 CC lib/thread/thread.o 00:01:29.080 CC lib/thread/iobuf.o 00:01:29.080 CC lib/sock/sock.o 00:01:29.080 CC lib/sock/sock_rpc.o 
00:01:29.080 SO libspdk_env_dpdk.so.14.0 00:01:29.080 SYMLINK libspdk_env_dpdk.so 00:01:29.358 LIB libspdk_sock.a 00:01:29.358 SO libspdk_sock.so.9.0 00:01:29.358 SYMLINK libspdk_sock.so 00:01:29.616 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:29.616 CC lib/nvme/nvme_ctrlr.o 00:01:29.616 CC lib/nvme/nvme_fabric.o 00:01:29.616 CC lib/nvme/nvme_ns_cmd.o 00:01:29.616 CC lib/nvme/nvme_ns.o 00:01:29.616 CC lib/nvme/nvme_pcie_common.o 00:01:29.616 CC lib/nvme/nvme_pcie.o 00:01:29.616 CC lib/nvme/nvme_qpair.o 00:01:29.616 CC lib/nvme/nvme.o 00:01:29.616 CC lib/nvme/nvme_quirks.o 00:01:29.616 CC lib/nvme/nvme_transport.o 00:01:29.616 CC lib/nvme/nvme_discovery.o 00:01:29.616 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:29.616 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:29.616 CC lib/nvme/nvme_tcp.o 00:01:29.616 CC lib/nvme/nvme_opal.o 00:01:29.616 CC lib/nvme/nvme_io_msg.o 00:01:29.616 CC lib/nvme/nvme_poll_group.o 00:01:29.616 CC lib/nvme/nvme_zns.o 00:01:29.616 CC lib/nvme/nvme_stubs.o 00:01:29.616 CC lib/nvme/nvme_auth.o 00:01:29.616 CC lib/nvme/nvme_cuse.o 00:01:29.616 CC lib/nvme/nvme_vfio_user.o 00:01:29.616 CC lib/nvme/nvme_rdma.o 00:01:30.549 LIB libspdk_thread.a 00:01:30.549 SO libspdk_thread.so.10.0 00:01:30.549 SYMLINK libspdk_thread.so 00:01:30.806 CC lib/virtio/virtio.o 00:01:30.806 CC lib/vfu_tgt/tgt_endpoint.o 00:01:30.806 CC lib/init/json_config.o 00:01:30.806 CC lib/virtio/virtio_vhost_user.o 00:01:30.806 CC lib/blob/blobstore.o 00:01:30.806 CC lib/vfu_tgt/tgt_rpc.o 00:01:30.806 CC lib/init/subsystem.o 00:01:30.806 CC lib/accel/accel.o 00:01:30.806 CC lib/blob/request.o 00:01:30.806 CC lib/virtio/virtio_vfio_user.o 00:01:30.806 CC lib/accel/accel_rpc.o 00:01:30.806 CC lib/init/subsystem_rpc.o 00:01:30.806 CC lib/blob/zeroes.o 00:01:30.806 CC lib/virtio/virtio_pci.o 00:01:30.806 CC lib/init/rpc.o 00:01:30.806 CC lib/blob/blob_bs_dev.o 00:01:30.806 CC lib/accel/accel_sw.o 00:01:31.064 LIB libspdk_init.a 00:01:31.064 SO libspdk_init.so.5.0 00:01:31.064 LIB libspdk_vfu_tgt.a 00:01:31.064 LIB libspdk_virtio.a 00:01:31.064 SYMLINK libspdk_init.so 00:01:31.064 SO libspdk_vfu_tgt.so.3.0 00:01:31.064 SO libspdk_virtio.so.7.0 00:01:31.064 SYMLINK libspdk_vfu_tgt.so 00:01:31.321 SYMLINK libspdk_virtio.so 00:01:31.321 CC lib/event/app.o 00:01:31.321 CC lib/event/reactor.o 00:01:31.321 CC lib/event/log_rpc.o 00:01:31.321 CC lib/event/app_rpc.o 00:01:31.321 CC lib/event/scheduler_static.o 00:01:31.578 LIB libspdk_event.a 00:01:31.836 SO libspdk_event.so.13.0 00:01:31.836 SYMLINK libspdk_event.so 00:01:31.836 LIB libspdk_accel.a 00:01:31.836 SO libspdk_accel.so.15.0 00:01:31.836 SYMLINK libspdk_accel.so 00:01:31.836 LIB libspdk_nvme.a 00:01:32.092 CC lib/bdev/bdev.o 00:01:32.092 CC lib/bdev/bdev_rpc.o 00:01:32.092 CC lib/bdev/bdev_zone.o 00:01:32.092 CC lib/bdev/part.o 00:01:32.092 CC lib/bdev/scsi_nvme.o 00:01:32.092 SO libspdk_nvme.so.13.0 00:01:32.350 SYMLINK libspdk_nvme.so 00:01:33.721 LIB libspdk_blob.a 00:01:33.721 SO libspdk_blob.so.11.0 00:01:33.721 SYMLINK libspdk_blob.so 00:01:33.979 CC lib/blobfs/blobfs.o 00:01:33.979 CC lib/blobfs/tree.o 00:01:33.979 CC lib/lvol/lvol.o 00:01:34.544 LIB libspdk_bdev.a 00:01:34.544 SO libspdk_bdev.so.15.0 00:01:34.802 LIB libspdk_blobfs.a 00:01:34.802 SYMLINK libspdk_bdev.so 00:01:34.802 SO libspdk_blobfs.so.10.0 00:01:34.802 SYMLINK libspdk_blobfs.so 00:01:34.802 LIB libspdk_lvol.a 00:01:34.802 SO libspdk_lvol.so.10.0 00:01:35.068 CC lib/ublk/ublk.o 00:01:35.068 CC lib/nbd/nbd.o 00:01:35.068 CC lib/ublk/ublk_rpc.o 00:01:35.068 CC lib/nbd/nbd_rpc.o 00:01:35.068 CC 
lib/scsi/dev.o 00:01:35.068 CC lib/ftl/ftl_core.o 00:01:35.068 CC lib/nvmf/ctrlr.o 00:01:35.068 CC lib/scsi/lun.o 00:01:35.068 CC lib/ftl/ftl_init.o 00:01:35.068 CC lib/nvmf/ctrlr_discovery.o 00:01:35.068 CC lib/scsi/port.o 00:01:35.068 CC lib/ftl/ftl_layout.o 00:01:35.068 CC lib/nvmf/ctrlr_bdev.o 00:01:35.068 CC lib/scsi/scsi.o 00:01:35.068 CC lib/ftl/ftl_debug.o 00:01:35.068 CC lib/scsi/scsi_bdev.o 00:01:35.068 CC lib/ftl/ftl_io.o 00:01:35.068 CC lib/nvmf/subsystem.o 00:01:35.068 CC lib/scsi/scsi_pr.o 00:01:35.068 CC lib/nvmf/nvmf.o 00:01:35.068 CC lib/ftl/ftl_sb.o 00:01:35.068 CC lib/scsi/scsi_rpc.o 00:01:35.068 CC lib/scsi/task.o 00:01:35.068 CC lib/ftl/ftl_l2p.o 00:01:35.068 CC lib/ftl/ftl_l2p_flat.o 00:01:35.068 CC lib/nvmf/nvmf_rpc.o 00:01:35.068 CC lib/nvmf/transport.o 00:01:35.068 CC lib/nvmf/tcp.o 00:01:35.068 CC lib/ftl/ftl_band.o 00:01:35.068 CC lib/ftl/ftl_nv_cache.o 00:01:35.068 CC lib/nvmf/stubs.o 00:01:35.068 CC lib/nvmf/vfio_user.o 00:01:35.068 CC lib/ftl/ftl_writer.o 00:01:35.068 CC lib/ftl/ftl_band_ops.o 00:01:35.068 CC lib/nvmf/rdma.o 00:01:35.068 CC lib/ftl/ftl_rq.o 00:01:35.068 CC lib/nvmf/auth.o 00:01:35.068 CC lib/ftl/ftl_reloc.o 00:01:35.068 CC lib/ftl/ftl_l2p_cache.o 00:01:35.068 CC lib/ftl/ftl_p2l.o 00:01:35.068 CC lib/ftl/mngt/ftl_mngt.o 00:01:35.068 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:35.068 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:35.068 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:35.068 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:35.068 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:35.068 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:35.068 SYMLINK libspdk_lvol.so 00:01:35.068 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:35.327 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:35.327 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:35.327 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:35.327 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:35.327 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:35.327 CC lib/ftl/utils/ftl_conf.o 00:01:35.327 CC lib/ftl/utils/ftl_md.o 00:01:35.327 CC lib/ftl/utils/ftl_mempool.o 00:01:35.327 CC lib/ftl/utils/ftl_bitmap.o 00:01:35.327 CC lib/ftl/utils/ftl_property.o 00:01:35.327 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:35.327 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:35.327 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:35.327 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:35.327 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:35.327 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:35.327 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:35.327 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:35.585 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:35.585 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:35.585 CC lib/ftl/base/ftl_base_dev.o 00:01:35.585 CC lib/ftl/base/ftl_base_bdev.o 00:01:35.585 CC lib/ftl/ftl_trace.o 00:01:35.843 LIB libspdk_nbd.a 00:01:35.843 SO libspdk_nbd.so.7.0 00:01:35.843 LIB libspdk_scsi.a 00:01:35.843 SYMLINK libspdk_nbd.so 00:01:35.843 SO libspdk_scsi.so.9.0 00:01:36.100 SYMLINK libspdk_scsi.so 00:01:36.100 LIB libspdk_ublk.a 00:01:36.100 SO libspdk_ublk.so.3.0 00:01:36.100 SYMLINK libspdk_ublk.so 00:01:36.100 CC lib/vhost/vhost.o 00:01:36.101 CC lib/iscsi/conn.o 00:01:36.101 CC lib/vhost/vhost_rpc.o 00:01:36.101 CC lib/iscsi/init_grp.o 00:01:36.101 CC lib/vhost/vhost_scsi.o 00:01:36.101 CC lib/iscsi/iscsi.o 00:01:36.101 CC lib/iscsi/md5.o 00:01:36.101 CC lib/vhost/vhost_blk.o 00:01:36.101 CC lib/iscsi/param.o 00:01:36.101 CC lib/vhost/rte_vhost_user.o 00:01:36.101 CC lib/iscsi/portal_grp.o 00:01:36.101 CC lib/iscsi/tgt_node.o 00:01:36.101 CC lib/iscsi/iscsi_subsystem.o 00:01:36.101 CC lib/iscsi/iscsi_rpc.o 00:01:36.101 CC 
lib/iscsi/task.o 00:01:36.359 LIB libspdk_ftl.a 00:01:36.359 SO libspdk_ftl.so.9.0 00:01:36.926 SYMLINK libspdk_ftl.so 00:01:37.492 LIB libspdk_vhost.a 00:01:37.492 SO libspdk_vhost.so.8.0 00:01:37.492 LIB libspdk_nvmf.a 00:01:37.492 SYMLINK libspdk_vhost.so 00:01:37.492 SO libspdk_nvmf.so.18.0 00:01:37.492 LIB libspdk_iscsi.a 00:01:37.752 SO libspdk_iscsi.so.8.0 00:01:37.752 SYMLINK libspdk_nvmf.so 00:01:37.752 SYMLINK libspdk_iscsi.so 00:01:38.014 CC module/env_dpdk/env_dpdk_rpc.o 00:01:38.014 CC module/vfu_device/vfu_virtio.o 00:01:38.014 CC module/vfu_device/vfu_virtio_blk.o 00:01:38.014 CC module/vfu_device/vfu_virtio_scsi.o 00:01:38.014 CC module/vfu_device/vfu_virtio_rpc.o 00:01:38.271 CC module/accel/ioat/accel_ioat.o 00:01:38.271 CC module/accel/error/accel_error.o 00:01:38.271 CC module/scheduler/gscheduler/gscheduler.o 00:01:38.271 CC module/accel/ioat/accel_ioat_rpc.o 00:01:38.271 CC module/accel/error/accel_error_rpc.o 00:01:38.271 CC module/blob/bdev/blob_bdev.o 00:01:38.271 CC module/sock/posix/posix.o 00:01:38.271 CC module/accel/dsa/accel_dsa.o 00:01:38.271 CC module/accel/dsa/accel_dsa_rpc.o 00:01:38.271 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:38.271 CC module/keyring/file/keyring.o 00:01:38.271 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:38.271 CC module/keyring/file/keyring_rpc.o 00:01:38.271 CC module/accel/iaa/accel_iaa.o 00:01:38.272 CC module/accel/iaa/accel_iaa_rpc.o 00:01:38.272 LIB libspdk_env_dpdk_rpc.a 00:01:38.272 SO libspdk_env_dpdk_rpc.so.6.0 00:01:38.272 SYMLINK libspdk_env_dpdk_rpc.so 00:01:38.272 LIB libspdk_keyring_file.a 00:01:38.272 LIB libspdk_scheduler_gscheduler.a 00:01:38.272 LIB libspdk_scheduler_dpdk_governor.a 00:01:38.272 SO libspdk_scheduler_gscheduler.so.4.0 00:01:38.272 SO libspdk_keyring_file.so.1.0 00:01:38.272 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:38.272 LIB libspdk_accel_error.a 00:01:38.272 LIB libspdk_accel_ioat.a 00:01:38.272 LIB libspdk_scheduler_dynamic.a 00:01:38.272 LIB libspdk_accel_iaa.a 00:01:38.529 SO libspdk_accel_error.so.2.0 00:01:38.529 SO libspdk_accel_ioat.so.6.0 00:01:38.529 SO libspdk_scheduler_dynamic.so.4.0 00:01:38.529 SYMLINK libspdk_scheduler_gscheduler.so 00:01:38.529 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:38.529 SYMLINK libspdk_keyring_file.so 00:01:38.529 SO libspdk_accel_iaa.so.3.0 00:01:38.529 LIB libspdk_accel_dsa.a 00:01:38.529 LIB libspdk_blob_bdev.a 00:01:38.529 SYMLINK libspdk_scheduler_dynamic.so 00:01:38.529 SO libspdk_accel_dsa.so.5.0 00:01:38.529 SYMLINK libspdk_accel_error.so 00:01:38.529 SYMLINK libspdk_accel_ioat.so 00:01:38.529 SO libspdk_blob_bdev.so.11.0 00:01:38.529 SYMLINK libspdk_accel_iaa.so 00:01:38.529 SYMLINK libspdk_accel_dsa.so 00:01:38.529 SYMLINK libspdk_blob_bdev.so 00:01:38.788 LIB libspdk_vfu_device.a 00:01:38.788 SO libspdk_vfu_device.so.3.0 00:01:38.788 CC module/blobfs/bdev/blobfs_bdev.o 00:01:38.788 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:38.788 CC module/bdev/null/bdev_null.o 00:01:38.788 CC module/bdev/nvme/bdev_nvme.o 00:01:38.788 CC module/bdev/malloc/bdev_malloc.o 00:01:38.788 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:38.788 CC module/bdev/lvol/vbdev_lvol.o 00:01:38.788 CC module/bdev/null/bdev_null_rpc.o 00:01:38.788 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:38.788 CC module/bdev/nvme/nvme_rpc.o 00:01:38.788 CC module/bdev/gpt/gpt.o 00:01:38.788 CC module/bdev/nvme/bdev_mdns_client.o 00:01:38.788 CC module/bdev/error/vbdev_error.o 00:01:38.788 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:38.788 CC 
module/bdev/error/vbdev_error_rpc.o 00:01:38.788 CC module/bdev/passthru/vbdev_passthru.o 00:01:38.788 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:38.788 CC module/bdev/aio/bdev_aio.o 00:01:38.788 CC module/bdev/aio/bdev_aio_rpc.o 00:01:38.788 CC module/bdev/nvme/vbdev_opal.o 00:01:38.788 CC module/bdev/gpt/vbdev_gpt.o 00:01:38.788 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:38.788 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:38.788 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:38.788 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:38.788 CC module/bdev/delay/vbdev_delay.o 00:01:38.788 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:38.788 CC module/bdev/split/vbdev_split.o 00:01:38.788 CC module/bdev/split/vbdev_split_rpc.o 00:01:38.788 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:38.788 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:38.788 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:38.788 CC module/bdev/raid/bdev_raid.o 00:01:38.788 CC module/bdev/raid/bdev_raid_rpc.o 00:01:38.788 CC module/bdev/ftl/bdev_ftl.o 00:01:38.788 CC module/bdev/raid/bdev_raid_sb.o 00:01:38.788 CC module/bdev/iscsi/bdev_iscsi.o 00:01:38.788 CC module/bdev/raid/raid0.o 00:01:38.788 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:38.788 CC module/bdev/raid/raid1.o 00:01:38.788 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:38.788 CC module/bdev/raid/concat.o 00:01:38.788 SYMLINK libspdk_vfu_device.so 00:01:39.046 LIB libspdk_sock_posix.a 00:01:39.046 SO libspdk_sock_posix.so.6.0 00:01:39.046 LIB libspdk_bdev_error.a 00:01:39.046 LIB libspdk_blobfs_bdev.a 00:01:39.046 SO libspdk_bdev_error.so.6.0 00:01:39.305 SO libspdk_blobfs_bdev.so.6.0 00:01:39.305 SYMLINK libspdk_sock_posix.so 00:01:39.305 LIB libspdk_bdev_split.a 00:01:39.305 LIB libspdk_bdev_null.a 00:01:39.305 LIB libspdk_bdev_zone_block.a 00:01:39.305 SYMLINK libspdk_blobfs_bdev.so 00:01:39.305 SYMLINK libspdk_bdev_error.so 00:01:39.305 SO libspdk_bdev_null.so.6.0 00:01:39.305 SO libspdk_bdev_split.so.6.0 00:01:39.305 LIB libspdk_bdev_gpt.a 00:01:39.305 SO libspdk_bdev_zone_block.so.6.0 00:01:39.305 LIB libspdk_bdev_passthru.a 00:01:39.305 SO libspdk_bdev_gpt.so.6.0 00:01:39.305 LIB libspdk_bdev_aio.a 00:01:39.305 SO libspdk_bdev_passthru.so.6.0 00:01:39.305 LIB libspdk_bdev_ftl.a 00:01:39.305 SYMLINK libspdk_bdev_split.so 00:01:39.305 SYMLINK libspdk_bdev_null.so 00:01:39.305 SYMLINK libspdk_bdev_zone_block.so 00:01:39.305 SO libspdk_bdev_aio.so.6.0 00:01:39.305 LIB libspdk_bdev_delay.a 00:01:39.305 SO libspdk_bdev_ftl.so.6.0 00:01:39.305 SYMLINK libspdk_bdev_gpt.so 00:01:39.305 SO libspdk_bdev_delay.so.6.0 00:01:39.305 SYMLINK libspdk_bdev_passthru.so 00:01:39.305 LIB libspdk_bdev_iscsi.a 00:01:39.305 SYMLINK libspdk_bdev_aio.so 00:01:39.305 SYMLINK libspdk_bdev_ftl.so 00:01:39.305 SO libspdk_bdev_iscsi.so.6.0 00:01:39.305 LIB libspdk_bdev_malloc.a 00:01:39.305 SYMLINK libspdk_bdev_delay.so 00:01:39.564 SO libspdk_bdev_malloc.so.6.0 00:01:39.564 SYMLINK libspdk_bdev_iscsi.so 00:01:39.564 SYMLINK libspdk_bdev_malloc.so 00:01:39.564 LIB libspdk_bdev_lvol.a 00:01:39.564 SO libspdk_bdev_lvol.so.6.0 00:01:39.564 LIB libspdk_bdev_virtio.a 00:01:39.564 SO libspdk_bdev_virtio.so.6.0 00:01:39.564 SYMLINK libspdk_bdev_lvol.so 00:01:39.564 SYMLINK libspdk_bdev_virtio.so 00:01:39.822 LIB libspdk_bdev_raid.a 00:01:40.080 SO libspdk_bdev_raid.so.6.0 00:01:40.080 SYMLINK libspdk_bdev_raid.so 00:01:41.017 LIB libspdk_bdev_nvme.a 00:01:41.017 SO libspdk_bdev_nvme.so.7.0 00:01:41.319 SYMLINK libspdk_bdev_nvme.so 00:01:41.600 CC module/event/subsystems/sock/sock.o 
00:01:41.600 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:41.600 CC module/event/subsystems/vmd/vmd.o 00:01:41.600 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:41.600 CC module/event/subsystems/scheduler/scheduler.o 00:01:41.600 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:41.600 CC module/event/subsystems/keyring/keyring.o 00:01:41.600 CC module/event/subsystems/iobuf/iobuf.o 00:01:41.600 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:41.600 LIB libspdk_event_sock.a 00:01:41.600 LIB libspdk_event_keyring.a 00:01:41.600 LIB libspdk_event_vhost_blk.a 00:01:41.600 LIB libspdk_event_vfu_tgt.a 00:01:41.600 LIB libspdk_event_scheduler.a 00:01:41.858 LIB libspdk_event_vmd.a 00:01:41.858 SO libspdk_event_keyring.so.1.0 00:01:41.859 SO libspdk_event_sock.so.5.0 00:01:41.859 SO libspdk_event_vhost_blk.so.3.0 00:01:41.859 SO libspdk_event_vfu_tgt.so.3.0 00:01:41.859 LIB libspdk_event_iobuf.a 00:01:41.859 SO libspdk_event_scheduler.so.4.0 00:01:41.859 SO libspdk_event_vmd.so.6.0 00:01:41.859 SO libspdk_event_iobuf.so.3.0 00:01:41.859 SYMLINK libspdk_event_sock.so 00:01:41.859 SYMLINK libspdk_event_vhost_blk.so 00:01:41.859 SYMLINK libspdk_event_vfu_tgt.so 00:01:41.859 SYMLINK libspdk_event_keyring.so 00:01:41.859 SYMLINK libspdk_event_scheduler.so 00:01:41.859 SYMLINK libspdk_event_vmd.so 00:01:41.859 SYMLINK libspdk_event_iobuf.so 00:01:42.118 CC module/event/subsystems/accel/accel.o 00:01:42.118 LIB libspdk_event_accel.a 00:01:42.118 SO libspdk_event_accel.so.6.0 00:01:42.118 SYMLINK libspdk_event_accel.so 00:01:42.377 CC module/event/subsystems/bdev/bdev.o 00:01:42.635 LIB libspdk_event_bdev.a 00:01:42.635 SO libspdk_event_bdev.so.6.0 00:01:42.635 SYMLINK libspdk_event_bdev.so 00:01:42.894 CC module/event/subsystems/scsi/scsi.o 00:01:42.894 CC module/event/subsystems/nbd/nbd.o 00:01:42.894 CC module/event/subsystems/ublk/ublk.o 00:01:42.894 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:42.894 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:42.894 LIB libspdk_event_ublk.a 00:01:42.894 LIB libspdk_event_nbd.a 00:01:42.894 LIB libspdk_event_scsi.a 00:01:42.894 SO libspdk_event_ublk.so.3.0 00:01:42.894 SO libspdk_event_nbd.so.6.0 00:01:42.894 SO libspdk_event_scsi.so.6.0 00:01:43.152 SYMLINK libspdk_event_nbd.so 00:01:43.152 SYMLINK libspdk_event_ublk.so 00:01:43.152 SYMLINK libspdk_event_scsi.so 00:01:43.152 LIB libspdk_event_nvmf.a 00:01:43.152 SO libspdk_event_nvmf.so.6.0 00:01:43.152 SYMLINK libspdk_event_nvmf.so 00:01:43.152 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:43.152 CC module/event/subsystems/iscsi/iscsi.o 00:01:43.411 LIB libspdk_event_vhost_scsi.a 00:01:43.411 SO libspdk_event_vhost_scsi.so.3.0 00:01:43.411 LIB libspdk_event_iscsi.a 00:01:43.411 SO libspdk_event_iscsi.so.6.0 00:01:43.411 SYMLINK libspdk_event_vhost_scsi.so 00:01:43.411 SYMLINK libspdk_event_iscsi.so 00:01:43.673 SO libspdk.so.6.0 00:01:43.673 SYMLINK libspdk.so 00:01:43.673 CC app/spdk_lspci/spdk_lspci.o 00:01:43.673 CXX app/trace/trace.o 00:01:43.673 CC app/spdk_nvme_discover/discovery_aer.o 00:01:43.673 CC app/spdk_top/spdk_top.o 00:01:43.673 CC app/spdk_nvme_perf/perf.o 00:01:43.673 TEST_HEADER include/spdk/accel.h 00:01:43.673 CC app/trace_record/trace_record.o 00:01:43.673 CC test/rpc_client/rpc_client_test.o 00:01:43.673 TEST_HEADER include/spdk/accel_module.h 00:01:43.673 CC app/spdk_nvme_identify/identify.o 00:01:43.673 TEST_HEADER include/spdk/assert.h 00:01:43.673 TEST_HEADER include/spdk/barrier.h 00:01:43.941 TEST_HEADER include/spdk/base64.h 00:01:43.941 TEST_HEADER 
include/spdk/bdev.h 00:01:43.941 TEST_HEADER include/spdk/bdev_module.h 00:01:43.941 TEST_HEADER include/spdk/bdev_zone.h 00:01:43.941 TEST_HEADER include/spdk/bit_array.h 00:01:43.941 TEST_HEADER include/spdk/bit_pool.h 00:01:43.941 TEST_HEADER include/spdk/blob_bdev.h 00:01:43.941 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:43.941 TEST_HEADER include/spdk/blobfs.h 00:01:43.941 TEST_HEADER include/spdk/blob.h 00:01:43.941 TEST_HEADER include/spdk/conf.h 00:01:43.941 TEST_HEADER include/spdk/config.h 00:01:43.941 TEST_HEADER include/spdk/cpuset.h 00:01:43.941 TEST_HEADER include/spdk/crc16.h 00:01:43.941 TEST_HEADER include/spdk/crc32.h 00:01:43.941 CC app/spdk_dd/spdk_dd.o 00:01:43.941 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:43.941 TEST_HEADER include/spdk/crc64.h 00:01:43.941 TEST_HEADER include/spdk/dif.h 00:01:43.941 CC app/iscsi_tgt/iscsi_tgt.o 00:01:43.941 TEST_HEADER include/spdk/dma.h 00:01:43.941 TEST_HEADER include/spdk/endian.h 00:01:43.941 CC app/nvmf_tgt/nvmf_main.o 00:01:43.941 CC app/vhost/vhost.o 00:01:43.941 TEST_HEADER include/spdk/env_dpdk.h 00:01:43.941 TEST_HEADER include/spdk/env.h 00:01:43.941 TEST_HEADER include/spdk/event.h 00:01:43.941 TEST_HEADER include/spdk/fd_group.h 00:01:43.941 TEST_HEADER include/spdk/fd.h 00:01:43.941 TEST_HEADER include/spdk/file.h 00:01:43.941 TEST_HEADER include/spdk/ftl.h 00:01:43.941 TEST_HEADER include/spdk/gpt_spec.h 00:01:43.941 TEST_HEADER include/spdk/hexlify.h 00:01:43.941 CC app/spdk_tgt/spdk_tgt.o 00:01:43.941 TEST_HEADER include/spdk/histogram_data.h 00:01:43.941 TEST_HEADER include/spdk/idxd.h 00:01:43.941 TEST_HEADER include/spdk/idxd_spec.h 00:01:43.941 TEST_HEADER include/spdk/init.h 00:01:43.941 TEST_HEADER include/spdk/ioat.h 00:01:43.941 CC examples/vmd/lsvmd/lsvmd.o 00:01:43.941 CC examples/vmd/led/led.o 00:01:43.941 TEST_HEADER include/spdk/ioat_spec.h 00:01:43.941 TEST_HEADER include/spdk/iscsi_spec.h 00:01:43.941 CC examples/util/zipf/zipf.o 00:01:43.941 CC test/nvme/aer/aer.o 00:01:43.941 CC app/fio/nvme/fio_plugin.o 00:01:43.941 CC test/nvme/startup/startup.o 00:01:43.941 CC test/nvme/reset/reset.o 00:01:43.941 TEST_HEADER include/spdk/json.h 00:01:43.941 CC examples/accel/perf/accel_perf.o 00:01:43.941 CC examples/ioat/verify/verify.o 00:01:43.941 TEST_HEADER include/spdk/jsonrpc.h 00:01:43.941 CC test/thread/poller_perf/poller_perf.o 00:01:43.941 CC examples/ioat/perf/perf.o 00:01:43.941 CC examples/sock/hello_world/hello_sock.o 00:01:43.941 TEST_HEADER include/spdk/keyring.h 00:01:43.941 CC test/nvme/overhead/overhead.o 00:01:43.941 CC test/nvme/sgl/sgl.o 00:01:43.941 CC test/app/histogram_perf/histogram_perf.o 00:01:43.941 TEST_HEADER include/spdk/keyring_module.h 00:01:43.941 TEST_HEADER include/spdk/likely.h 00:01:43.941 CC examples/nvme/hello_world/hello_world.o 00:01:43.941 CC test/nvme/err_injection/err_injection.o 00:01:43.941 CC test/nvme/e2edp/nvme_dp.o 00:01:43.941 CC test/event/event_perf/event_perf.o 00:01:43.941 TEST_HEADER include/spdk/log.h 00:01:43.941 CC examples/idxd/perf/perf.o 00:01:43.941 TEST_HEADER include/spdk/lvol.h 00:01:43.941 TEST_HEADER include/spdk/memory.h 00:01:43.941 TEST_HEADER include/spdk/mmio.h 00:01:43.941 TEST_HEADER include/spdk/nbd.h 00:01:43.941 TEST_HEADER include/spdk/notify.h 00:01:43.941 TEST_HEADER include/spdk/nvme.h 00:01:43.941 TEST_HEADER include/spdk/nvme_intel.h 00:01:43.941 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:43.941 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:43.941 TEST_HEADER include/spdk/nvme_spec.h 00:01:43.941 TEST_HEADER 
include/spdk/nvme_zns.h 00:01:43.941 CC test/bdev/bdevio/bdevio.o 00:01:43.941 CC test/dma/test_dma/test_dma.o 00:01:43.941 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:43.941 CC examples/bdev/hello_world/hello_bdev.o 00:01:43.941 CC examples/thread/thread/thread_ex.o 00:01:43.941 CC test/accel/dif/dif.o 00:01:43.941 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:43.941 CC app/fio/bdev/fio_plugin.o 00:01:43.941 TEST_HEADER include/spdk/nvmf.h 00:01:43.941 TEST_HEADER include/spdk/nvmf_spec.h 00:01:43.941 CC examples/blob/hello_world/hello_blob.o 00:01:43.941 CC test/app/bdev_svc/bdev_svc.o 00:01:43.941 TEST_HEADER include/spdk/nvmf_transport.h 00:01:43.941 TEST_HEADER include/spdk/opal.h 00:01:43.941 CC test/blobfs/mkfs/mkfs.o 00:01:43.941 TEST_HEADER include/spdk/opal_spec.h 00:01:43.941 CC examples/nvmf/nvmf/nvmf.o 00:01:43.941 TEST_HEADER include/spdk/pci_ids.h 00:01:43.941 TEST_HEADER include/spdk/pipe.h 00:01:43.941 TEST_HEADER include/spdk/queue.h 00:01:43.941 TEST_HEADER include/spdk/reduce.h 00:01:44.206 TEST_HEADER include/spdk/rpc.h 00:01:44.206 TEST_HEADER include/spdk/scheduler.h 00:01:44.206 TEST_HEADER include/spdk/scsi.h 00:01:44.206 TEST_HEADER include/spdk/scsi_spec.h 00:01:44.206 CC test/env/mem_callbacks/mem_callbacks.o 00:01:44.206 TEST_HEADER include/spdk/sock.h 00:01:44.206 TEST_HEADER include/spdk/stdinc.h 00:01:44.206 LINK spdk_lspci 00:01:44.206 TEST_HEADER include/spdk/string.h 00:01:44.206 TEST_HEADER include/spdk/thread.h 00:01:44.206 TEST_HEADER include/spdk/trace.h 00:01:44.206 TEST_HEADER include/spdk/trace_parser.h 00:01:44.206 TEST_HEADER include/spdk/tree.h 00:01:44.206 CC test/lvol/esnap/esnap.o 00:01:44.206 TEST_HEADER include/spdk/ublk.h 00:01:44.206 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:44.206 TEST_HEADER include/spdk/util.h 00:01:44.206 TEST_HEADER include/spdk/uuid.h 00:01:44.206 TEST_HEADER include/spdk/version.h 00:01:44.206 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:44.206 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:44.206 TEST_HEADER include/spdk/vhost.h 00:01:44.206 TEST_HEADER include/spdk/vmd.h 00:01:44.206 TEST_HEADER include/spdk/xor.h 00:01:44.206 TEST_HEADER include/spdk/zipf.h 00:01:44.206 LINK rpc_client_test 00:01:44.206 CXX test/cpp_headers/accel.o 00:01:44.206 LINK spdk_nvme_discover 00:01:44.206 LINK lsvmd 00:01:44.206 LINK led 00:01:44.206 LINK interrupt_tgt 00:01:44.206 LINK zipf 00:01:44.206 LINK poller_perf 00:01:44.206 LINK histogram_perf 00:01:44.206 LINK nvmf_tgt 00:01:44.206 LINK vhost 00:01:44.206 LINK event_perf 00:01:44.206 LINK iscsi_tgt 00:01:44.206 LINK startup 00:01:44.206 LINK spdk_trace_record 00:01:44.469 LINK spdk_tgt 00:01:44.469 LINK err_injection 00:01:44.469 LINK ioat_perf 00:01:44.469 LINK verify 00:01:44.469 LINK bdev_svc 00:01:44.469 LINK hello_sock 00:01:44.469 LINK hello_world 00:01:44.469 LINK mkfs 00:01:44.469 LINK sgl 00:01:44.469 LINK reset 00:01:44.469 LINK aer 00:01:44.469 LINK thread 00:01:44.469 LINK nvme_dp 00:01:44.469 CXX test/cpp_headers/accel_module.o 00:01:44.469 LINK hello_blob 00:01:44.469 LINK hello_bdev 00:01:44.469 LINK overhead 00:01:44.469 CC test/env/vtophys/vtophys.o 00:01:44.731 LINK spdk_dd 00:01:44.731 CC test/app/jsoncat/jsoncat.o 00:01:44.731 LINK idxd_perf 00:01:44.731 LINK nvmf 00:01:44.731 LINK spdk_trace 00:01:44.731 CC test/event/reactor/reactor.o 00:01:44.731 CC test/event/reactor_perf/reactor_perf.o 00:01:44.731 CXX test/cpp_headers/assert.o 00:01:44.731 CXX test/cpp_headers/barrier.o 00:01:44.731 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:44.731 
LINK dif 00:01:44.731 CC examples/nvme/reconnect/reconnect.o 00:01:44.731 CC test/nvme/reserve/reserve.o 00:01:44.731 LINK bdevio 00:01:44.731 CC test/app/stub/stub.o 00:01:44.731 CC test/env/memory/memory_ut.o 00:01:44.731 LINK test_dma 00:01:44.731 CC examples/bdev/bdevperf/bdevperf.o 00:01:44.731 LINK accel_perf 00:01:44.731 CC test/nvme/simple_copy/simple_copy.o 00:01:44.731 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:44.731 CC examples/blob/cli/blobcli.o 00:01:44.997 CXX test/cpp_headers/base64.o 00:01:44.997 CC test/env/pci/pci_ut.o 00:01:44.997 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:44.997 CC test/event/app_repeat/app_repeat.o 00:01:44.997 CC test/nvme/connect_stress/connect_stress.o 00:01:44.997 LINK jsoncat 00:01:44.997 LINK nvme_fuzz 00:01:44.997 LINK vtophys 00:01:44.997 CXX test/cpp_headers/bdev.o 00:01:44.997 CXX test/cpp_headers/bdev_module.o 00:01:44.997 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:44.997 LINK spdk_bdev 00:01:44.997 LINK spdk_nvme 00:01:44.997 CC test/nvme/boot_partition/boot_partition.o 00:01:44.997 CC examples/nvme/arbitration/arbitration.o 00:01:44.997 LINK reactor 00:01:44.997 CC test/nvme/compliance/nvme_compliance.o 00:01:44.997 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:44.997 LINK reactor_perf 00:01:44.997 CC test/event/scheduler/scheduler.o 00:01:44.997 CC examples/nvme/hotplug/hotplug.o 00:01:44.997 CC examples/nvme/abort/abort.o 00:01:44.997 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:44.997 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:44.997 CXX test/cpp_headers/bdev_zone.o 00:01:44.997 CC test/nvme/fused_ordering/fused_ordering.o 00:01:44.997 LINK env_dpdk_post_init 00:01:44.997 CXX test/cpp_headers/bit_array.o 00:01:44.997 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:45.257 LINK stub 00:01:45.257 CXX test/cpp_headers/bit_pool.o 00:01:45.257 CXX test/cpp_headers/blob_bdev.o 00:01:45.257 CXX test/cpp_headers/blobfs_bdev.o 00:01:45.257 LINK reserve 00:01:45.257 CXX test/cpp_headers/blobfs.o 00:01:45.257 CXX test/cpp_headers/blob.o 00:01:45.257 LINK app_repeat 00:01:45.257 CC test/nvme/fdp/fdp.o 00:01:45.257 CXX test/cpp_headers/conf.o 00:01:45.257 CXX test/cpp_headers/config.o 00:01:45.257 CXX test/cpp_headers/cpuset.o 00:01:45.257 CC test/nvme/cuse/cuse.o 00:01:45.257 CXX test/cpp_headers/crc16.o 00:01:45.257 CXX test/cpp_headers/crc32.o 00:01:45.257 LINK simple_copy 00:01:45.257 CXX test/cpp_headers/crc64.o 00:01:45.257 CXX test/cpp_headers/dif.o 00:01:45.257 LINK connect_stress 00:01:45.257 CXX test/cpp_headers/dma.o 00:01:45.257 CXX test/cpp_headers/endian.o 00:01:45.257 LINK mem_callbacks 00:01:45.257 LINK boot_partition 00:01:45.257 LINK spdk_nvme_perf 00:01:45.257 CXX test/cpp_headers/env_dpdk.o 00:01:45.517 CXX test/cpp_headers/env.o 00:01:45.517 LINK spdk_nvme_identify 00:01:45.517 CXX test/cpp_headers/event.o 00:01:45.517 LINK cmb_copy 00:01:45.517 LINK reconnect 00:01:45.517 LINK pmr_persistence 00:01:45.517 CXX test/cpp_headers/fd_group.o 00:01:45.517 CXX test/cpp_headers/fd.o 00:01:45.517 LINK scheduler 00:01:45.517 LINK doorbell_aers 00:01:45.517 CXX test/cpp_headers/file.o 00:01:45.517 CXX test/cpp_headers/ftl.o 00:01:45.517 LINK hotplug 00:01:45.517 LINK spdk_top 00:01:45.517 LINK fused_ordering 00:01:45.517 CXX test/cpp_headers/gpt_spec.o 00:01:45.517 CXX test/cpp_headers/hexlify.o 00:01:45.517 CXX test/cpp_headers/idxd.o 00:01:45.517 CXX test/cpp_headers/histogram_data.o 00:01:45.517 CXX test/cpp_headers/idxd_spec.o 00:01:45.517 CXX test/cpp_headers/init.o 00:01:45.517 CXX 
test/cpp_headers/ioat.o 00:01:45.517 CXX test/cpp_headers/ioat_spec.o 00:01:45.517 CXX test/cpp_headers/iscsi_spec.o 00:01:45.517 LINK pci_ut 00:01:45.517 CXX test/cpp_headers/json.o 00:01:45.517 CXX test/cpp_headers/jsonrpc.o 00:01:45.517 CXX test/cpp_headers/keyring.o 00:01:45.517 CXX test/cpp_headers/keyring_module.o 00:01:45.517 CXX test/cpp_headers/likely.o 00:01:45.517 CXX test/cpp_headers/log.o 00:01:45.517 LINK nvme_compliance 00:01:45.784 CXX test/cpp_headers/lvol.o 00:01:45.784 LINK arbitration 00:01:45.784 CXX test/cpp_headers/memory.o 00:01:45.784 CXX test/cpp_headers/mmio.o 00:01:45.784 CXX test/cpp_headers/nbd.o 00:01:45.784 CXX test/cpp_headers/notify.o 00:01:45.784 CXX test/cpp_headers/nvme.o 00:01:45.784 CXX test/cpp_headers/nvme_intel.o 00:01:45.784 CXX test/cpp_headers/nvme_ocssd.o 00:01:45.784 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:45.784 CXX test/cpp_headers/nvme_spec.o 00:01:45.784 CXX test/cpp_headers/nvme_zns.o 00:01:45.784 LINK abort 00:01:45.784 CXX test/cpp_headers/nvmf_cmd.o 00:01:45.784 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:45.784 CXX test/cpp_headers/nvmf.o 00:01:45.784 CXX test/cpp_headers/nvmf_transport.o 00:01:45.784 CXX test/cpp_headers/nvmf_spec.o 00:01:45.784 CXX test/cpp_headers/opal.o 00:01:45.784 CXX test/cpp_headers/opal_spec.o 00:01:45.784 CXX test/cpp_headers/pci_ids.o 00:01:45.784 LINK blobcli 00:01:45.784 LINK nvme_manage 00:01:45.784 CXX test/cpp_headers/pipe.o 00:01:45.784 LINK vhost_fuzz 00:01:45.784 LINK fdp 00:01:45.784 CXX test/cpp_headers/queue.o 00:01:46.043 CXX test/cpp_headers/reduce.o 00:01:46.043 CXX test/cpp_headers/rpc.o 00:01:46.043 CXX test/cpp_headers/scheduler.o 00:01:46.043 CXX test/cpp_headers/scsi.o 00:01:46.043 CXX test/cpp_headers/scsi_spec.o 00:01:46.043 CXX test/cpp_headers/sock.o 00:01:46.043 CXX test/cpp_headers/stdinc.o 00:01:46.043 CXX test/cpp_headers/string.o 00:01:46.043 CXX test/cpp_headers/thread.o 00:01:46.043 CXX test/cpp_headers/trace.o 00:01:46.043 CXX test/cpp_headers/trace_parser.o 00:01:46.043 CXX test/cpp_headers/tree.o 00:01:46.043 CXX test/cpp_headers/ublk.o 00:01:46.043 CXX test/cpp_headers/util.o 00:01:46.043 CXX test/cpp_headers/uuid.o 00:01:46.043 CXX test/cpp_headers/version.o 00:01:46.043 CXX test/cpp_headers/vfio_user_pci.o 00:01:46.043 CXX test/cpp_headers/vfio_user_spec.o 00:01:46.043 CXX test/cpp_headers/vhost.o 00:01:46.043 CXX test/cpp_headers/vmd.o 00:01:46.043 CXX test/cpp_headers/xor.o 00:01:46.043 CXX test/cpp_headers/zipf.o 00:01:46.302 LINK bdevperf 00:01:46.302 LINK memory_ut 00:01:46.868 LINK cuse 00:01:47.125 LINK iscsi_fuzz 00:01:49.657 LINK esnap 00:01:49.916 00:01:49.916 real 0m47.917s 00:01:49.916 user 10m4.314s 00:01:49.916 sys 2m25.787s 00:01:49.916 02:18:37 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:49.916 02:18:37 make -- common/autotest_common.sh@10 -- $ set +x 00:01:49.916 ************************************ 00:01:49.916 END TEST make 00:01:49.916 ************************************ 00:01:49.916 02:18:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:49.916 02:18:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:49.916 02:18:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:49.916 02:18:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.916 02:18:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:49.916 02:18:37 -- pm/common@44 -- $ pid=2088884 00:01:49.916 02:18:37 -- pm/common@50 -- $ kill -TERM 2088884 
00:01:49.916 02:18:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.916 02:18:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:49.916 02:18:37 -- pm/common@44 -- $ pid=2088886 00:01:49.916 02:18:37 -- pm/common@50 -- $ kill -TERM 2088886 00:01:49.916 02:18:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.916 02:18:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:49.916 02:18:37 -- pm/common@44 -- $ pid=2088888 00:01:49.916 02:18:37 -- pm/common@50 -- $ kill -TERM 2088888 00:01:49.916 02:18:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.916 02:18:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:49.916 02:18:37 -- pm/common@44 -- $ pid=2088924 00:01:49.916 02:18:37 -- pm/common@50 -- $ sudo -E kill -TERM 2088924 00:01:49.916 02:18:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:49.916 02:18:37 -- nvmf/common.sh@7 -- # uname -s 00:01:49.916 02:18:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:49.916 02:18:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:49.916 02:18:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:49.916 02:18:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:49.916 02:18:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:49.916 02:18:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:49.916 02:18:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:49.916 02:18:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:49.916 02:18:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:49.916 02:18:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:49.916 02:18:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:49.916 02:18:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:49.916 02:18:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:49.916 02:18:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:49.916 02:18:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:49.916 02:18:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:49.916 02:18:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:49.916 02:18:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:49.916 02:18:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.916 02:18:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.916 02:18:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.916 02:18:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.916 02:18:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.916 02:18:37 -- paths/export.sh@5 -- # export PATH 00:01:49.917 02:18:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.917 02:18:37 -- nvmf/common.sh@47 -- # : 0 00:01:49.917 02:18:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:49.917 02:18:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:49.917 02:18:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:49.917 02:18:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:49.917 02:18:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:49.917 02:18:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:49.917 02:18:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:49.917 02:18:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:49.917 02:18:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:49.917 02:18:37 -- spdk/autotest.sh@32 -- # uname -s 00:01:49.917 02:18:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:49.917 02:18:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:49.917 02:18:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:49.917 02:18:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:49.917 02:18:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:49.917 02:18:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:50.176 02:18:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:50.176 02:18:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:50.176 02:18:37 -- spdk/autotest.sh@48 -- # udevadm_pid=2144134 00:01:50.176 02:18:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:50.176 02:18:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:50.176 02:18:37 -- pm/common@17 -- # local monitor 00:01:50.176 02:18:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.176 02:18:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.176 02:18:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.176 02:18:37 -- pm/common@21 -- # date +%s 00:01:50.176 02:18:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.176 02:18:37 -- pm/common@21 -- # date +%s 00:01:50.176 02:18:37 -- pm/common@25 -- # sleep 1 00:01:50.176 02:18:37 -- pm/common@21 -- # date +%s 00:01:50.176 02:18:37 -- pm/common@21 -- # date +%s 00:01:50.176 02:18:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715732317 00:01:50.176 02:18:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715732317 
00:01:50.176 02:18:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715732317 00:01:50.176 02:18:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715732317 00:01:50.176 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715732317_collect-vmstat.pm.log 00:01:50.176 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715732317_collect-cpu-load.pm.log 00:01:50.176 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715732317_collect-cpu-temp.pm.log 00:01:50.176 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715732317_collect-bmc-pm.bmc.pm.log 00:01:51.110 02:18:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:51.110 02:18:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:51.110 02:18:38 -- common/autotest_common.sh@720 -- # xtrace_disable 00:01:51.110 02:18:38 -- common/autotest_common.sh@10 -- # set +x 00:01:51.110 02:18:38 -- spdk/autotest.sh@59 -- # create_test_list 00:01:51.110 02:18:38 -- common/autotest_common.sh@744 -- # xtrace_disable 00:01:51.110 02:18:38 -- common/autotest_common.sh@10 -- # set +x 00:01:51.110 02:18:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:51.110 02:18:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:51.110 02:18:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:51.110 02:18:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:51.110 02:18:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:51.110 02:18:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:51.110 02:18:38 -- common/autotest_common.sh@1451 -- # uname 00:01:51.110 02:18:38 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:01:51.110 02:18:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:51.110 02:18:38 -- common/autotest_common.sh@1471 -- # uname 00:01:51.110 02:18:38 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:01:51.110 02:18:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:51.110 02:18:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:51.110 02:18:38 -- spdk/autotest.sh@72 -- # hash lcov 00:01:51.110 02:18:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:51.110 02:18:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:51.110 --rc lcov_branch_coverage=1 00:01:51.110 --rc lcov_function_coverage=1 00:01:51.110 --rc genhtml_branch_coverage=1 00:01:51.110 --rc genhtml_function_coverage=1 00:01:51.110 --rc genhtml_legend=1 00:01:51.110 --rc geninfo_all_blocks=1 00:01:51.110 ' 00:01:51.110 02:18:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:51.110 --rc lcov_branch_coverage=1 00:01:51.110 --rc lcov_function_coverage=1 00:01:51.110 --rc genhtml_branch_coverage=1 00:01:51.110 --rc genhtml_function_coverage=1 00:01:51.110 --rc genhtml_legend=1 00:01:51.110 --rc geninfo_all_blocks=1 00:01:51.110 ' 
00:01:51.110 02:18:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:51.110 --rc lcov_branch_coverage=1 00:01:51.110 --rc lcov_function_coverage=1 00:01:51.110 --rc genhtml_branch_coverage=1 00:01:51.110 --rc genhtml_function_coverage=1 00:01:51.110 --rc genhtml_legend=1 00:01:51.110 --rc geninfo_all_blocks=1 00:01:51.110 --no-external' 00:01:51.110 02:18:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:51.110 --rc lcov_branch_coverage=1 00:01:51.110 --rc lcov_function_coverage=1 00:01:51.110 --rc genhtml_branch_coverage=1 00:01:51.110 --rc genhtml_function_coverage=1 00:01:51.110 --rc genhtml_legend=1 00:01:51.110 --rc geninfo_all_blocks=1 00:01:51.110 --no-external' 00:01:51.110 02:18:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:51.110 lcov: LCOV version 1.14 00:01:51.110 02:18:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:03.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:03.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:05.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:05.200 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:05.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:05.200 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:05.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:05.200 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:23.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:23.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:23.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:23.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions 
found 00:02:23.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:23.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:23.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:23.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:23.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:23.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:23.304 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:23.304 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:23.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:23.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 
00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:23.305 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:23.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:23.871 02:19:11 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:23.871 02:19:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:23.871 02:19:11 -- common/autotest_common.sh@10 -- # set +x 00:02:23.871 02:19:11 -- spdk/autotest.sh@91 -- # rm -f 00:02:23.871 02:19:11 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:25.244 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:25.244 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:25.244 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:25.244 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:25.244 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:25.244 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:25.244 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:25.244 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:25.244 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:25.244 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:25.244 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:25.244 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:25.244 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:25.244 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:25.245 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:25.245 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:25.245 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:25.245 02:19:12 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:25.245 02:19:12 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:25.245 02:19:12 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:25.245 02:19:12 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:25.245 02:19:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:25.245 02:19:12 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:25.245 02:19:12 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:25.245 02:19:12 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:25.245 02:19:12 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:25.245 02:19:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:25.245 02:19:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:25.245 02:19:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:25.245 02:19:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:25.245 02:19:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:25.245 02:19:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:25.503 No valid GPT data, bailing 00:02:25.503 02:19:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:25.503 02:19:12 -- scripts/common.sh@391 -- # pt= 00:02:25.503 02:19:12 -- scripts/common.sh@392 -- # return 1 00:02:25.503 02:19:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:25.503 1+0 records in 
00:02:25.503 1+0 records out 00:02:25.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00201078 s, 521 MB/s 00:02:25.503 02:19:12 -- spdk/autotest.sh@118 -- # sync 00:02:25.503 02:19:12 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:25.503 02:19:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:25.503 02:19:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:27.402 02:19:14 -- spdk/autotest.sh@124 -- # uname -s 00:02:27.402 02:19:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:27.402 02:19:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:27.402 02:19:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:27.402 02:19:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:27.402 02:19:14 -- common/autotest_common.sh@10 -- # set +x 00:02:27.402 ************************************ 00:02:27.402 START TEST setup.sh 00:02:27.402 ************************************ 00:02:27.402 02:19:14 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:27.402 * Looking for test storage... 00:02:27.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:27.402 02:19:14 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:27.402 02:19:14 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:27.402 02:19:14 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:27.402 02:19:14 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:27.402 02:19:14 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:27.402 02:19:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:27.402 ************************************ 00:02:27.402 START TEST acl 00:02:27.402 ************************************ 00:02:27.402 02:19:14 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:27.402 * Looking for test storage... 
00:02:27.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:27.402 02:19:14 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:27.402 02:19:14 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:27.402 02:19:14 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:27.402 02:19:14 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:27.402 02:19:14 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:27.402 02:19:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:27.402 02:19:14 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:27.402 02:19:14 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:27.402 02:19:14 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:27.402 02:19:14 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:27.402 02:19:14 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:27.402 02:19:14 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:27.402 02:19:14 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:27.402 02:19:14 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:27.402 02:19:14 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:27.402 02:19:14 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:29.303 02:19:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:29.304 02:19:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:29.304 02:19:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.304 02:19:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:29.304 02:19:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.304 02:19:16 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:30.678 Hugepages 00:02:30.678 node hugesize free / total 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 00:02:30.678 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.678 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.679 02:19:17 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:30.679 02:19:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:30.679 02:19:17 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:30.679 02:19:17 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:30.679 02:19:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:30.679 ************************************ 00:02:30.679 START TEST denied 00:02:30.679 ************************************ 00:02:30.679 02:19:17 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:30.679 02:19:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:30.679 02:19:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:30.679 02:19:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:30.679 02:19:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.679 02:19:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:32.054 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:32.054 02:19:19 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:32.054 02:19:19 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:32.054 02:19:19 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:32.054 02:19:19 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:32.054 02:19:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:32.054 02:19:19 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:32.054 02:19:19 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:32.054 02:19:19 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:32.054 02:19:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:32.054 02:19:19 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:34.585 00:02:34.585 real 0m4.005s 00:02:34.585 user 0m1.242s 00:02:34.585 sys 0m1.960s 00:02:34.585 02:19:21 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:34.585 02:19:21 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:34.585 ************************************ 00:02:34.585 END TEST denied 00:02:34.585 ************************************ 00:02:34.585 02:19:21 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:34.585 02:19:21 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:34.585 02:19:21 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:34.585 02:19:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:34.585 ************************************ 00:02:34.585 START TEST allowed 00:02:34.585 ************************************ 00:02:34.585 02:19:21 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:34.585 02:19:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:34.585 02:19:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:34.585 02:19:21 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:34.585 02:19:21 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.585 02:19:21 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:37.115 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:37.115 02:19:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:37.115 02:19:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:37.115 02:19:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:37.115 02:19:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:37.115 02:19:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:39.066 00:02:39.066 real 0m4.125s 00:02:39.066 user 0m1.194s 00:02:39.066 sys 0m1.861s 00:02:39.066 02:19:26 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:39.066 02:19:26 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:39.066 ************************************ 00:02:39.066 END TEST allowed 00:02:39.066 ************************************ 00:02:39.066 00:02:39.066 real 0m11.357s 00:02:39.066 user 0m3.587s 00:02:39.066 sys 0m5.987s 00:02:39.066 02:19:26 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:39.066 02:19:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:39.066 ************************************ 00:02:39.066 END TEST acl 00:02:39.066 ************************************ 00:02:39.066 02:19:26 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:39.066 02:19:26 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:39.066 02:19:26 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:02:39.066 02:19:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:39.066 ************************************ 00:02:39.066 START TEST hugepages 00:02:39.066 ************************************ 00:02:39.066 02:19:26 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:39.066 * Looking for test storage... 00:02:39.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35489460 kB' 'MemAvailable: 40175776 kB' 'Buffers: 2696 kB' 'Cached: 18408648 kB' 'SwapCached: 0 kB' 'Active: 14399304 kB' 'Inactive: 4470784 kB' 'Active(anon): 13810144 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462008 kB' 'Mapped: 191044 kB' 'Shmem: 13351400 kB' 'KReclaimable: 239988 kB' 'Slab: 634268 kB' 'SReclaimable: 239988 kB' 'SUnreclaim: 394280 kB' 'KernelStack: 13072 kB' 'PageTables: 9284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 14943128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.066 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.067 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:39.068 02:19:26 
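The long run of "continue" lines above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches Hugepagesize, echoing 2048 and returning; hugepages.sh then records that as default_hugepages, derives the /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages and /proc/sys/vm/nr_hugepages knobs from it, and clear_hp zeroes every per-node pool before the tests start. A stripped-down sketch of that lookup pattern (hypothetical helper name; the real function also handles per-node meminfo, see later in the trace):

    # Illustrative stand-in for the get_meminfo scan traced above -- not the
    # script's exact code.
    meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            # Stop at the first matching field and print its value column.
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1   # field not present
    }
    # meminfo_field Hugepagesize   -> 2048   (size fields are reported in kB)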
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:39.068 02:19:26 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:39.068 02:19:26 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:39.068 02:19:26 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:39.068 02:19:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:39.068 ************************************ 00:02:39.068 START TEST default_setup 00:02:39.068 ************************************ 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.068 02:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:40.442 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:40.442 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:40.442 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:40.442 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:40.442 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:40.442 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:40.442 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:02:40.442 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:40.442 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:40.442 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:40.442 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:40.442 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:40.442 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:40.442 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:40.442 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:40.442 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:41.381 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.381 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37605444 kB' 'MemAvailable: 42291728 kB' 'Buffers: 2696 kB' 'Cached: 18408744 kB' 'SwapCached: 0 kB' 'Active: 14423140 kB' 'Inactive: 4470784 kB' 'Active(anon): 13833980 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485812 kB' 'Mapped: 191480 kB' 'Shmem: 13351496 kB' 'KReclaimable: 239924 kB' 'Slab: 633880 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393956 kB' 'KernelStack: 13264 kB' 'PageTables: 9640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14970624 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 199132 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
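Stepping back to the sizing set up before the device rebinds: default_setup asked get_test_nr_hugepages for 2097152 kB on node 0, which with 2048 kB pages works out to 1024 hugepages. A back-of-the-envelope restatement of that step, using values taken from the log (the helper structure is an approximation of setup/hugepages.sh, not a copy; the xtrace shows the clear_hp loop and "echo 0" but not the redirect, so the target file below is presumed):

    default_hugepages=2048                         # kB, from Hugepagesize above
    size=2097152                                   # kB requested by default_setup
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024 pages

    # clear_hp-style reset: zero every per-node, per-size pool before the test.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done

scripts/setup.sh then performs the actual reservation and device binding; the /proc/meminfo snapshot directly above already reflects the result (HugePages_Total: 1024, HugePages_Free: 1024, Hugetlb: 2097152 kB, i.e. 2 GiB of 2 MiB pages). The ioatdma and NVMe devices being rebound to vfio-pci in that same step is what allows SPDK to drive them from user space.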
00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.382 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.383 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
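With anon=0 recorded, the HugePages_Surp lookup starting here runs through the same get_meminfo prologue seen above: node is empty, so the /sys/devices/system/node/node/meminfo check fails and the function falls back to /proc/meminfo. When a node is passed, the per-node meminfo file is used instead, and the mem=("${mem[@]#Node +([0-9]) }") step strips the leading "Node N " from every line so the same field parse applies. A rough equivalent for the per-node case, with a hypothetical helper name rather than the script's own:

    # Assumed simplification of the node= branch visible in the prologue above.
    node_meminfo_field() {
        local node=$1 want=$2 line var val _
        while read -r line; do
            line=${line#"Node $node "}            # "Node 0 HugePages_Free: 1024" -> "HugePages_Free: 1024"
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
    # node_meminfo_field 0 HugePages_Free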
00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37610680 kB' 'MemAvailable: 42296964 kB' 'Buffers: 2696 kB' 'Cached: 18408748 kB' 'SwapCached: 0 kB' 'Active: 14424592 kB' 'Inactive: 4470784 kB' 'Active(anon): 13835432 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487268 kB' 'Mapped: 191932 kB' 'Shmem: 13351500 kB' 'KReclaimable: 239924 kB' 'Slab: 633880 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393956 kB' 'KernelStack: 13376 kB' 'PageTables: 10528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14971712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199248 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.383 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 
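This scan and the HugePages_Rsvd scan that follows pull the remaining hugepage counters out of the same snapshot. For anyone reproducing the run, the same numbers can be eyeballed directly; grep here is just a convenient stand-in for get_meminfo, not what the harness itself does:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|AnonHugePages' /proc/meminfo
    # Expected after default_setup, per the snapshots in this log:
    #   AnonHugePages:         0 kB
    #   HugePages_Total:    1024
    #   HugePages_Free:     1024
    #   HugePages_Rsvd:        0
    #   HugePages_Surp:        0
    #   Hugepagesize:       2048 kB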
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.384 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.385 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37609496 kB' 'MemAvailable: 42295780 kB' 'Buffers: 2696 kB' 'Cached: 18408764 kB' 'SwapCached: 0 kB' 'Active: 14419836 kB' 'Inactive: 4470784 kB' 'Active(anon): 13830676 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482272 kB' 'Mapped: 191496 kB' 'Shmem: 13351516 kB' 'KReclaimable: 239924 kB' 'Slab: 633880 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393956 kB' 'KernelStack: 13248 kB' 'PageTables: 10588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14965612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199388 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.647 02:19:28 
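The block above is setup/common.sh's get_meminfo helper running under xtrace: it mapfiles /proc/meminfo, walks it line by line with IFS=': ' and read -r var val _, hits continue for every key that is not the one requested (HugePages_Surp just above, HugePages_Rsvd in the pass that follows), and finally echoes the matching value and returns so the caller can capture it (here surp=0). A minimal standalone sketch of that pattern follows; get_meminfo_sketch is a hypothetical name used only for illustration, not the SPDK helper itself.

    #!/usr/bin/env bash
    # Sketch of the /proc/meminfo lookup pattern traced above (hypothetical helper,
    # not SPDK's setup/common.sh).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key
            echo "$val"                        # numeric value; any trailing "kB" was split off by read
            return 0
        done < /proc/meminfo
        return 1
    }

    # The caller captures the value via command substitution, the way hugepages.sh
    # fills in surp/resv/nr_hugepages.
    surp=$(get_meminfo_sketch HugePages_Surp)
    total=$(get_meminfo_sketch HugePages_Total)
    echo "HugePages_Total=$total HugePages_Surp=$surp"
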
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.647 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.648 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.648 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:41.649 nr_hugepages=1024 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:41.649 resv_hugepages=0 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:41.649 surplus_hugepages=0 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:41.649 anon_hugepages=0 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37609884 kB' 'MemAvailable: 42296168 kB' 'Buffers: 2696 kB' 'Cached: 18408784 kB' 'SwapCached: 0 kB' 'Active: 14418264 
kB' 'Inactive: 4470784 kB' 'Active(anon): 13829104 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480324 kB' 'Mapped: 191496 kB' 'Shmem: 13351536 kB' 'KReclaimable: 239924 kB' 'Slab: 633904 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393980 kB' 'KernelStack: 13280 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14963048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199164 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.649 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.650 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:41.651 
02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21270160 kB' 'MemUsed: 11559724 kB' 'SwapCached: 0 kB' 'Active: 8060864 kB' 'Inactive: 187176 kB' 'Active(anon): 7664708 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8020824 kB' 'Mapped: 86872 kB' 'AnonPages: 230332 kB' 'Shmem: 7437492 kB' 'KernelStack: 7976 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 332548 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 216864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:41.651 02:19:28 
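At this point hugepages.sh has surp=0, resv=0 and HugePages_Total=1024, confirms the pool is consistent with (( 1024 == nr_hugepages + surp + resv )), then calls get_nodes to walk /sys/devices/system/node/node+([0-9]) (no_nodes=2 on this host) and re-runs get_meminfo against node0's meminfo file, whose lines carry a leading "Node N " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A self-contained sketch of that consistency check and per-node walk follows; node_meminfo_sketch and the mismatch message are hypothetical, used only for illustration.

    #!/usr/bin/env bash
    shopt -s extglob   # the node+([0-9]) glob and the "Node N " strip below need extglob
    # Hypothetical helper mirroring the per-node get_meminfo path traced above.
    node_meminfo_sketch() {   # usage: node_meminfo_sketch <field> <node-id>
        local get=$1 node=$2 line var val _ mem
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start with "Node N ": strip it
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    # Global accounting: allocated pool == requested + surplus + reserved.
    nr_hugepages=1024
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"

    # Per-node distribution of the pool (this job detected no_nodes=2).
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        echo "node${node}: HugePages_Total=$(node_meminfo_sketch HugePages_Total "$node")"
    done
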
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.651 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:41.652 node0=1024 expecting 1024 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:41.652 00:02:41.652 real 0m2.656s 00:02:41.652 user 0m0.685s 00:02:41.652 sys 0m0.983s 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:41.652 02:19:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:41.652 ************************************ 00:02:41.653 END TEST default_setup 00:02:41.653 ************************************ 00:02:41.653 02:19:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:41.653 02:19:28 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:41.653 02:19:28 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:41.653 02:19:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:41.653 ************************************ 00:02:41.653 START TEST per_node_1G_alloc 00:02:41.653 ************************************ 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
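The trace above closes the default_setup case (its final check, node0=1024 expecting 1024, passed in roughly 2.7 s) and opens per_node_1G_alloc, which asks get_test_nr_hugepages for 1048576 kB spread across nodes 0 and 1. A minimal sketch of that page-count arithmetic, assuming the 2048 kB default hugepage size reported later in the meminfo snapshot; variable names mirror the trace, but the snippet is illustrative rather than the script verbatim:

  # 1 GiB requested per node, expressed in 2 MiB hugepages
  default_hugepages=2048                         # kB, Hugepagesize from /proc/meminfo
  size=1048576                                   # kB, first argument seen in the trace
  node_ids=(0 1)                                 # nodes named in the call
  nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512 pages per node

  declare -a nodes_test
  for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages               # 512 on node 0 and 512 on node 1
  done
  echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"   # NRHUGE=512 HUGENODE=0,1

That matches the NRHUGE=512 HUGENODE=0,1 environment the trace sets just before re-running scripts/setup.sh below.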
00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.653 02:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:43.028 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:43.028 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:43.028 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.028 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.028 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.028 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.028 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:43.028 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.028 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.028 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:43.028 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.028 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.028 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.028 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.028 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:43.028 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.028 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37593488 kB' 'MemAvailable: 42279772 kB' 'Buffers: 2696 kB' 'Cached: 18408864 kB' 'SwapCached: 0 kB' 'Active: 14419372 kB' 'Inactive: 4470784 kB' 'Active(anon): 13830212 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481928 kB' 'Mapped: 191124 kB' 'Shmem: 13351616 kB' 'KReclaimable: 239924 kB' 'Slab: 634124 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 394200 kB' 'KernelStack: 12960 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14963612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 
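Everything from here to the next "echo 0" is setup/common.sh scanning the snapshot above key by key: it splits each line on IFS=': ', compares the key against the one requested (AnonHugePages first, then HugePages_Surp and HugePages_Rsvd), and continues until it finds a match. A condensed sketch of that lookup, using an illustrative function name rather than the script's own helper:

  shopt -s extglob                               # for the +([0-9]) prefix pattern seen in the trace

  meminfo_value() {                              # usage: meminfo_value HugePages_Total [file]
    local get=$1 mem_f=${2:-/proc/meminfo}
    local line var val _
    while IFS= read -r line; do
      line=${line#Node +([0-9]) }                # per-node files prefix every key with "Node <n> "
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
  }

  meminfo_value HugePages_Total                  # prints 1024 for the snapshot above

The snapshot itself already shows the pool is in place: HugePages_Total: 1024 at Hugepagesize: 2048 kB accounts exactly for the Hugetlb: 2097152 kB reported alongside it.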
00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 
02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.028 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.029 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.292 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
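At this point verify_nr_hugepages has recorded anon=0 from the AnonHugePages pass and starts the same walk again for HugePages_Surp (and, after that, HugePages_Rsvd). Roughly, those readings feed the per-node comparison that ends in lines like the earlier "node0=1024 expecting 1024". A simplified sketch of that accounting under those assumptions; the helper and the final sanity check here are stand-ins, not the script's own code:

  meminfo_value() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }   # standalone stand-in

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # "always [madvise] never" in this run
  anon=0
  [[ $thp != *'[never]'* ]] && anon=$(meminfo_value AnonHugePages)   # 0 kB here
  surp=$(meminfo_value HugePages_Surp)                               # 0 here
  resv=$(meminfo_value HugePages_Rsvd)                               # looked up next in the trace

  nodes_test=(512 512)                           # what per_node_1G_alloc requested on nodes 0 and 1
  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += surp ))               # surplus pages still count toward the pool
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
  done
  (( anon == 0 && resv == 0 )) || echo "pool is not clean: anon=$anon kB resv=$resv"   # illustrative check only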
00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37596252 kB' 'MemAvailable: 42282536 kB' 'Buffers: 2696 kB' 'Cached: 18408868 kB' 'SwapCached: 0 kB' 'Active: 14419124 kB' 'Inactive: 4470784 kB' 'Active(anon): 13829964 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481760 kB' 'Mapped: 191116 kB' 'Shmem: 13351620 kB' 'KReclaimable: 239924 kB' 'Slab: 634116 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 394192 kB' 'KernelStack: 13024 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14963632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.293 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.294 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:43.295 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37596976 kB' 'MemAvailable: 42283260 kB' 'Buffers: 2696 kB' 'Cached: 18408880 kB' 'SwapCached: 0 kB' 'Active: 14418512 kB' 'Inactive: 4470784 kB' 'Active(anon): 13829352 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481060 kB' 'Mapped: 191040 kB' 'Shmem: 13351632 kB' 'KReclaimable: 239924 kB' 'Slab: 634084 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 394160 kB' 'KernelStack: 12992 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14963652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.295 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:43.296 nr_hugepages=1024 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:43.296 resv_hugepages=0 00:02:43.296 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:43.296 surplus_hugepages=0 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:43.296 anon_hugepages=0 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.296 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37597384 kB' 'MemAvailable: 42283668 kB' 'Buffers: 2696 kB' 'Cached: 18408908 kB' 'SwapCached: 0 kB' 'Active: 14418624 kB' 'Inactive: 4470784 kB' 'Active(anon): 13829464 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481124 kB' 'Mapped: 191040 kB' 'Shmem: 13351660 kB' 'KReclaimable: 239924 kB' 'Slab: 634084 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 394160 kB' 'KernelStack: 13008 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14963676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.297 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:43.298 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.299 02:19:30 
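With the global counters confirmed (HugePages_Total = 1024, HugePages_Rsvd = 0, HugePages_Surp = 0, so 1024 == nr_hugepages + surp + resv holds), the test moves on to per-node accounting: get_nodes records how many pages each of the two NUMA nodes exposes (512 + 512), then each node's HugePages_Surp is read from /sys/devices/system/node/nodeN/meminfo. A rough sketch of that logic, using the variable names from the setup/hugepages.sh trace, follows; the source of the value 512 and the final per-node comparison are assumptions, since the trace only shows the expanded assignments and this chunk of the log cuts off before the check completes.

# Sketch of the per-node hugepage accounting from the setup/hugepages.sh trace.
# nodes_sys[]  - pages each NUMA node currently exposes (512 + 512 in this log);
# nodes_test[] - the per-node expectation the test is verifying.
# get_meminfo is the helper sketched earlier; check_nodes is a hypothetical name
# for the loop traced at hugepages.sh@115-@117.
shopt -s extglob

nodes_sys=() nodes_test=()
resv=0

get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# The trace only shows the expanded result "=512"; reading the node's
		# 2048kB nr_hugepages counter is an assumption about where it comes from.
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]} # 2 on this machine
	((no_nodes > 0))
}

check_nodes() {
	local node surp
	for node in "${!nodes_test[@]}"; do
		((nodes_test[node] += resv)) # fold reserved pages into the expectation
		surp=$(get_meminfo HugePages_Surp "$node")
		# Comparing the expectation with what the node reports is an assumption;
		# the actual check lies beyond the end of this log chunk.
		((nodes_test[node] == nodes_sys[node] + surp)) || return 1
	done
}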
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22312828 kB' 'MemUsed: 10517056 kB' 'SwapCached: 0 kB' 'Active: 8060636 kB' 'Inactive: 187176 kB' 'Active(anon): 7664480 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8020832 kB' 'Mapped: 86884 kB' 'AnonPages: 230136 kB' 'Shmem: 7437500 kB' 'KernelStack: 7960 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 332624 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 216940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.299 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15285776 kB' 'MemUsed: 12426068 kB' 'SwapCached: 0 kB' 'Active: 6357944 kB' 'Inactive: 4283608 kB' 'Active(anon): 6164940 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10390816 kB' 'Mapped: 104156 kB' 'AnonPages: 250864 kB' 'Shmem: 5914204 kB' 'KernelStack: 5048 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124240 kB' 'Slab: 301460 kB' 'SReclaimable: 124240 kB' 'SUnreclaim: 177220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
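The xtrace above steps through the per-node meminfo lookup field by field: pick /sys/devices/system/node/node1/meminfo when a node is given, strip the "Node N " prefix, then read "key: value" pairs until the requested field matches. A minimal sketch of that lookup pattern, under the assumption of a hypothetical helper name lookup_meminfo (the captured script's real helper is get_meminfo in setup/common.sh, which uses mapfile plus an extglob prefix strip rather than a read loop):

# Sketch of the lookup pattern visible in the trace above; lookup_meminfo is an
# assumed name, not the SPDK helper itself.
lookup_meminfo() {
    local get=$1 node=${2:-}        # field name, optional NUMA node
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}  # per-node meminfo prefixes every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0                          # field not present in this meminfo source
}

Against the node1 dump printed above ('HugePages_Free: 512'), lookup_meminfo HugePages_Free 1 should report 512, consistent with the "node1=512 expecting 512" check later in this trace.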
00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.300 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.301 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.302 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:43.302 node0=512 expecting 512 00:02:43.302 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.302 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.302 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.302 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:43.302 node1=512 expecting 512 00:02:43.302 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:43.302 00:02:43.302 real 0m1.654s 00:02:43.302 user 0m0.672s 00:02:43.302 sys 0m0.952s 00:02:43.302 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:43.302 02:19:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:43.302 ************************************ 00:02:43.302 END TEST per_node_1G_alloc 00:02:43.302 ************************************ 00:02:43.302 02:19:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:43.302 02:19:30 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:43.302 02:19:30 
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:43.302 02:19:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:43.302 ************************************ 00:02:43.302 START TEST even_2G_alloc 00:02:43.302 ************************************ 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.302 02:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:44.680 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.680 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:44.680 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:02:44.680 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.680 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.680 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.680 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.680 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.680 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.680 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.680 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.680 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.680 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.680 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.680 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.680 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.680 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37594764 kB' 'MemAvailable: 42281048 kB' 'Buffers: 2696 kB' 'Cached: 18408996 kB' 'SwapCached: 0 kB' 'Active: 14419120 kB' 'Inactive: 4470784 kB' 'Active(anon): 13829960 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481560 kB' 'Mapped: 191488 kB' 'Shmem: 13351748 kB' 'KReclaimable: 239924 kB' 'Slab: 633912 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393988 kB' 'KernelStack: 13040 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14963876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199100 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.680 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.681 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:44.682 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37595684 kB' 'MemAvailable: 42281968 kB' 'Buffers: 2696 kB' 'Cached: 18409000 kB' 'SwapCached: 0 kB' 'Active: 14418580 kB' 'Inactive: 4470784 kB' 'Active(anon): 13829420 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481048 kB' 'Mapped: 191128 kB' 'Shmem: 13351752 kB' 'KReclaimable: 239924 kB' 'Slab: 633972 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 394048 kB' 'KernelStack: 13008 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14963896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 
02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.682 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 
02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.683 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 
02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37596364 kB' 'MemAvailable: 42282648 kB' 'Buffers: 2696 kB' 'Cached: 18409000 kB' 'SwapCached: 0 kB' 'Active: 14418460 kB' 'Inactive: 4470784 kB' 'Active(anon): 13829300 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480836 kB' 'Mapped: 191052 kB' 'Shmem: 13351752 kB' 'KReclaimable: 239924 kB' 'Slab: 633916 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393992 kB' 'KernelStack: 13008 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14963916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.684 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.685 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:44.686 nr_hugepages=1024 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:44.686 resv_hugepages=0 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:44.686 surplus_hugepages=0 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:44.686 anon_hugepages=0 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.686 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 
02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37596116 kB' 'MemAvailable: 42282400 kB' 'Buffers: 2696 kB' 'Cached: 18409040 kB' 'SwapCached: 0 kB' 'Active: 14418624 kB' 'Inactive: 4470784 kB' 'Active(anon): 13829464 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480956 kB' 'Mapped: 191052 kB' 'Shmem: 13351792 kB' 'KReclaimable: 239924 kB' 'Slab: 633908 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393984 kB' 'KernelStack: 13024 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14964840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199036 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.687 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.949 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
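Once this scan returns 1024, the test moves on to the per-node check (hugepages.sh@110-@128 below): the global count must equal nr_hugepages plus surplus and reserved pages, and each NUMA node must hold its expected share, which is what ultimately prints the 'node0=512 expecting 512' and 'node1=512 expecting 512' lines further down. A hedged sketch of that per-node verification, using a get_meminfo helper like the one sketched above; the function name and accounting here are illustrative rather than the literal SPDK code:

verify_even_alloc() {
    local expected_per_node=$1
    local path node total surp
    for path in /sys/devices/system/node/node[0-9]*; do
        node=${path##*node}
        total=$(get_meminfo HugePages_Total "$node")
        surp=$(get_meminfo HugePages_Surp "$node")
        echo "node$node=$((total - surp)) expecting $expected_per_node"
        (( total - surp == expected_per_node )) || return 1
    done
}

For the even_2G_alloc case this amounts to verify_even_alloc 512 on a two-node machine.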
00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22318176 kB' 'MemUsed: 10511708 kB' 'SwapCached: 0 kB' 'Active: 8063464 kB' 'Inactive: 187176 kB' 'Active(anon): 7667308 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8020836 kB' 'Mapped: 86896 kB' 'AnonPages: 232932 kB' 'Shmem: 7437504 kB' 'KernelStack: 8088 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 
0 kB' 'KReclaimable: 115684 kB' 'Slab: 332460 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 216776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.950 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.951 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15278624 kB' 'MemUsed: 12433220 kB' 'SwapCached: 0 kB' 'Active: 6357812 kB' 'Inactive: 4283608 kB' 'Active(anon): 6164808 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10390920 kB' 'Mapped: 104592 kB' 'AnonPages: 251044 kB' 'Shmem: 5914308 kB' 'KernelStack: 5128 kB' 'PageTables: 5164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 124240 kB' 'Slab: 301448 kB' 'SReclaimable: 124240 kB' 'SUnreclaim: 177208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.952 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:44.953 node0=512 expecting 512 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:44.953 node1=512 expecting 512 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:44.953 00:02:44.953 real 0m1.530s 00:02:44.953 user 0m0.634s 00:02:44.953 sys 0m0.861s 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:44.953 02:19:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:44.953 ************************************ 00:02:44.953 END TEST even_2G_alloc 00:02:44.953 ************************************ 00:02:44.953 02:19:32 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:44.953 02:19:32 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:44.953 02:19:32 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:44.953 02:19:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:44.953 ************************************ 00:02:44.953 START TEST odd_alloc 00:02:44.953 ************************************ 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@1121 -- # odd_alloc 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:44.953 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:44.954 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:44.954 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.954 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:44.954 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:44.954 02:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:44.954 02:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.954 02:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:46.329 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:46.329 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:46.329 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.329 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.329 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.329 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.329 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.329 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:46.329 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:46.329 0000:80:04.7 (8086 0e27): 
Already using the vfio-pci driver 00:02:46.329 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.329 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.329 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.329 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.329 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.329 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:46.329 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.329 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37589312 kB' 'MemAvailable: 42275596 kB' 'Buffers: 2696 kB' 'Cached: 18409132 kB' 'SwapCached: 0 kB' 'Active: 14410900 kB' 'Inactive: 4470784 kB' 'Active(anon): 13821740 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473016 kB' 'Mapped: 190224 kB' 'Shmem: 13351884 kB' 'KReclaimable: 239924 kB' 'Slab: 633840 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393916 kB' 'KernelStack: 12912 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14934716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198908 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.330 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical compare-and-continue records omitted for the remaining /proc/meminfo fields ...]
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
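The records above are bash xtrace output of the get_meminfo helper in setup/common.sh; the backslash-escaped right-hand side such as \A\n\o\n\H\u\g\e\P\a\g\e\s is simply how xtrace prints the pattern operand of [[ ... == ... ]]. The helper snapshots /proc/meminfo (or a per-NUMA-node meminfo file when a node is given), strips the "Node <n> " prefix those files carry, then walks the snapshot with IFS=': ' read -r var val _ until the requested key matches and echoes its value. A minimal sketch of that lookup, reconstructed from the trace rather than copied from the SPDK script (names and control flow are illustrative), assuming a recent bash:

#!/usr/bin/env bash
# Minimal sketch of the traced lookup; not the verbatim SPDK helper.
shopt -s extglob                                   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}                       # key to look up, optional NUMA node
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                      # snapshot the file, one line per element
    mem=("${mem[@]#Node +([0-9]) }")               # drop the "Node 0 " prefix of per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"     # e.g. var=AnonHugePages val=0 _=kB
        [[ $var == "$get" ]] || continue           # the compare/continue records seen above
        echo "$val"
        return 0
    done
    return 1
}

# Example, mirroring the trace: anon=$(get_meminfo_sketch AnonHugePages)   # -> 0

With the meminfo shown in the snapshots below, a lookup of AnonHugePages prints 0, which is the anon=0 recorded at setup/hugepages.sh@97 above.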
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.331 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37592184 kB' 'MemAvailable: 42278468 kB' 'Buffers: 2696 kB' 'Cached: 18409136 kB' 'SwapCached: 0 kB' 'Active: 14411576 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822416 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473740 kB' 'Mapped: 190224 kB' 'Shmem: 13351888 kB' 'KReclaimable: 239924 kB' 'Slab: 633832 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393908 kB' 'KernelStack: 12928 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14934732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB'
[... identical compare-and-continue records omitted for the fields preceding HugePages_Surp ...]
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.595 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37592680 kB' 'MemAvailable: 42278964 kB' 'Buffers: 2696 kB' 'Cached: 18409152 kB' 'SwapCached: 0 kB' 'Active: 14410748 kB' 'Inactive: 4470784 kB' 'Active(anon): 13821588 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472860 kB' 'Mapped: 190208 kB' 'Shmem: 13351904 kB' 'KReclaimable: 239924 kB' 'Slab: 633812 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393888 kB' 'KernelStack: 12880 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14934756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198828 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB'
[... identical compare-and-continue records omitted for the fields preceding HugePages_Rsvd ...]
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:46.596 nr_hugepages=1025
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:46.596 resv_hugepages=0
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:46.596 surplus_hugepages=0
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:46.596 anon_hugepages=0
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
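At this point the lookups have produced anon=0, surp=0 and resv=0, while the snapshots report HugePages_Total: 1025, so the checks at setup/hugepages.sh@107 and @109 hold. A small bash sketch of that accounting step, using the values echoed in the log (variable names follow the traced setup/hugepages.sh, but this is a reconstruction, not the verbatim script):

#!/usr/bin/env bash
# Values as echoed in the trace above (setup/hugepages.sh@102-105).
nr_hugepages=1025   # HugePages_Total
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
anon=0              # AnonHugePages, in kB

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Mirrors the traced checks: the expected odd allocation (1025 pages) must
# equal HugePages_Total plus surplus and reserved pages, and match
# HugePages_Total itself; with surp=resv=0 both reduce to 1025 == 1025.
(( 1025 == nr_hugepages + surp + resv )) && (( 1025 == nr_hugepages )) && echo "hugepage accounting OK"

# Cross-check against the snapshot: Hugepagesize is 2048 kB, so
# 1025 pages account for 1025 * 2048 = 2099200 kB, the Hugetlb value above.
echo "$(( nr_hugepages * 2048 )) kB"

# One-pass alternative (illustrative, not taken from the SPDK scripts):
#   awk -F': *' '/^HugePages_(Total|Free|Rsvd|Surp)/ { print $1 "=" $2 }' /proc/meminfo

The trace then moves on to a final get_meminfo HugePages_Total lookup, shown below.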
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.596 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37592680 kB' 'MemAvailable: 42278964 kB' 'Buffers: 2696 kB' 'Cached: 18409172 kB' 'SwapCached: 0 kB' 'Active: 14410792 kB' 'Inactive: 4470784 kB' 'Active(anon): 13821632 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472896 kB' 'Mapped: 190208 kB' 'Shmem: 13351924 kB' 'KReclaimable: 239924 kB' 'Slab: 633812 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393888 kB' 'KernelStack: 12896 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14934776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198844 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB'
[... identical compare-and-continue records omitted for the fields preceding HugePages_Total; the scan resumes below ...]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22299888 kB' 'MemUsed: 10529996 kB' 'SwapCached: 0 kB' 'Active: 8057708 kB' 'Inactive: 187176 kB' 'Active(anon): 7661552 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8020848 kB' 'Mapped: 86140 kB' 'AnonPages: 227160 kB' 'Shmem: 7437516 kB' 'KernelStack: 7992 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 332468 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 216784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.597 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15291356 kB' 'MemUsed: 12420488 kB' 'SwapCached: 0 kB' 'Active: 6353080 kB' 'Inactive: 4283608 kB' 'Active(anon): 6160076 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10391060 kB' 'Mapped: 104068 kB' 'AnonPages: 245744 kB' 'Shmem: 5914448 kB' 'KernelStack: 4904 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124240 kB' 'Slab: 301344 kB' 'SReclaimable: 124240 kB' 'SUnreclaim: 177104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
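
The trace entries around this point are setup/common.sh's get_meminfo helper scanning a NUMA node's meminfo file one "Key: value" pair at a time until it reaches the requested field (here HugePages_Surp, first for node 0 and then for node 1). A minimal stand-alone sketch of that lookup is shown below for orientation only; the file selection and the "Node N " prefix stripping mirror what the trace shows, while the function name and exact structure are illustrative, not the script's own code.

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above: prefer the per-node file when a
# node is given, strip the "Node N " prefix those files carry on every row,
# then scan "Key: value" pairs until the requested key matches.
shopt -s extglob

meminfo_lookup() {                       # illustrative name, not the script's
        local get=$1 node=${2-} line var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
                line=${line#Node +([0-9]) }      # per-node rows start with "Node N "
                IFS=': ' read -r var val _ <<< "$line"
                if [[ $var == "$get" ]]; then
                        echo "$val"              # e.g. "512" for HugePages_Total on node 0
                        return 0
                fi
        done < "$mem_f"
        return 1
}

# Usage matching the values printed in this run:
#   meminfo_lookup HugePages_Surp 0    ->  0
#   meminfo_lookup HugePages_Total 1   ->  513

The surrounding hugepages.sh loop adds that per-node surplus into nodes_test[], which is how the 1025 odd-allocated pages get checked against the expected 512/513 split across the two nodes later in this test.
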
00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.598 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:46.599 node0=512 expecting 513 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:46.599 node1=513 expecting 512 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:46.599 00:02:46.599 real 0m1.653s 00:02:46.599 user 0m0.720s 00:02:46.599 sys 0m0.902s 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:46.599 02:19:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:46.599 ************************************ 00:02:46.599 END TEST odd_alloc 00:02:46.599 ************************************ 00:02:46.599 02:19:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:46.599 02:19:33 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:46.599 02:19:33 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:46.599 02:19:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:46.599 ************************************ 00:02:46.599 START TEST custom_alloc 00:02:46.599 ************************************ 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.599 02:19:33 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:46.599 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.600 02:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:48.007 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:48.007 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:48.007 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:48.007 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:48.007 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:48.007 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:48.007 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:48.007 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:48.007 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:48.007 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:48.007 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:48.007 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:48.007 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:48.007 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:48.007 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:48.007 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 
00:02:48.007 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36557832 kB' 'MemAvailable: 41244116 kB' 'Buffers: 2696 kB' 'Cached: 18409264 kB' 'SwapCached: 0 kB' 'Active: 14412008 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822848 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474044 kB' 'Mapped: 190328 kB' 'Shmem: 13352016 kB' 'KReclaimable: 239924 kB' 'Slab: 633380 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393456 kB' 'KernelStack: 12912 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14935280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 
'DirectMap1G: 47185920 kB' 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.007 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.008 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
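(The walk traced above is setup/common.sh's get_meminfo stepping through every /proc/meminfo key until it reaches the one it was asked for — AnonHugePages first, which yields anon=0, and next HugePages_Surp; with HUGENODE requesting 512 pages on node 0 and 1024 on node 1, verify_nr_hugepages expects the 1536-page total visible in the HugePages_Total/HugePages_Free fields of the snapshots. A minimal stand-alone sketch of the same parsing idea follows; the helper name get_meminfo_value and its exact interface are illustrative assumptions, not the actual setup/common.sh code.)

  #!/usr/bin/env bash
  # Sketch only (illustrative, not the real setup/common.sh): read one key out of
  # /proc/meminfo, or out of a node's meminfo file when a node number is given,
  # splitting each line on ': ' exactly as the traced read loop does.
  get_meminfo_value() {
      local key=$1 node=${2:-}
      local file=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node [0-9]* }          # per-node files prefix every key with "Node <N> "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$key" ]]; then
              echo "$val"                    # number only, e.g. HugePages_Total -> 1536 on this run
              return 0
          fi
      done < "$file"
      return 1
  }
  # Example: get_meminfo_value HugePages_Total      (system-wide)
  #          get_meminfo_value HugePages_Total 0    (node 0 only)

(Used that way, the 1536 HugePages_Total / HugePages_Free figures in the snapshots above are what the verification step reads back, while keys that do not match simply fall through — which is what every "continue" line in the trace below corresponds to.)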
00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36557180 kB' 'MemAvailable: 41243464 kB' 'Buffers: 2696 kB' 'Cached: 18409268 kB' 'SwapCached: 0 kB' 'Active: 14411260 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822100 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473320 kB' 'Mapped: 190328 kB' 'Shmem: 13352020 kB' 'KReclaimable: 239924 kB' 'Slab: 633412 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393488 kB' 'KernelStack: 12912 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14935296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.009 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.274 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.275 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36557180 kB' 'MemAvailable: 41243464 kB' 'Buffers: 2696 kB' 'Cached: 18409280 kB' 'SwapCached: 0 kB' 'Active: 14410900 kB' 'Inactive: 4470784 kB' 'Active(anon): 13821740 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472876 kB' 'Mapped: 190252 kB' 'Shmem: 13352032 kB' 'KReclaimable: 239924 kB' 'Slab: 633400 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393476 kB' 'KernelStack: 12896 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 37086604 kB' 'Committed_AS: 14935320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.276 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.277 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.277 
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [log condensed: the read loop walks the remaining /proc/meminfo keys (Zswap, Zswapped, Dirty, Writeback, AnonPages ... CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) and continues past each one that is not HugePages_Rsvd] 00:02:48.277-00:02:48.279
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.279
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.279
02:19:35 setup.sh.hugepages.custom_alloc --
setup/common.sh@33 -- # return 0 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:48.279 nr_hugepages=1536 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.279 resv_hugepages=0 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.279 surplus_hugepages=0 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.279 anon_hugepages=0 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.279 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36557816 kB' 'MemAvailable: 41244100 kB' 'Buffers: 2696 kB' 'Cached: 18409304 kB' 'SwapCached: 0 kB' 'Active: 14411128 kB' 'Inactive: 4470784 kB' 'Active(anon): 13821968 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473132 kB' 'Mapped: 190252 kB' 'Shmem: 13352056 kB' 'KReclaimable: 239924 kB' 'Slab: 633400 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393476 kB' 'KernelStack: 12912 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14935340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:48.279 02:19:35 
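[editor's note] The block above is setup/common.sh's get_meminfo walking the meminfo dump it just printf'd, one "key: value" pair at a time, until the requested field (HugePages_Total here) matches and its value is echoed. A minimal bash sketch of that lookup pattern, assuming the behaviour shown in the trace; the helper name and structure are illustrative, not the actual setup/common.sh source:

shopt -s extglob

get_meminfo_sketch() {
	local get=$1 node=${2:-}   # field name, optional NUMA node number
	local mem_f=/proc/meminfo  # system-wide view by default
	local var val _

	# With a node argument, read the per-node view from sysfs instead.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	local -a mem
	mapfile -t mem < "$mem_f"
	# Per-node lines carry a "Node <n> " prefix; strip it so both files
	# parse identically ("Node 0 HugePages_Total: 512" -> "HugePages_Total: 512").
	mem=("${mem[@]#Node +([0-9]) }")

	# Walk the key/value pairs and print the value of the requested field.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# On the host captured above these would print 1536 and 512 respectively:
#   get_meminfo_sketch HugePages_Total
#   get_meminfo_sketch HugePages_Total 0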
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [log condensed: the read loop walks the /proc/meminfo keys (MemTotal, MemFree, MemAvailable ... CmaFree, Unaccepted) and continues past each one that is not HugePages_Total] 00:02:48.279-00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.282
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.282
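[editor's note] get_nodes above ends up with nodes_sys[0]=512 and nodes_sys[1]=1024. A minimal sketch of that discovery step, assuming the per-node counts come from the 2048kB nr_hugepages sysfs attribute (the trace only shows the resulting assignments, so the exact path is an assumption and the helper name is illustrative):

shopt -s extglob nullglob

declare -a nodes_sys
get_nodes_sketch() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# "${node##*node}" keeps only the numeric id: .../node1 -> 1
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	(( ${#nodes_sys[@]} > 0 ))   # mirrors the "(( no_nodes > 0 ))" guard in the trace
}

get_nodes_sketch && declare -p nodes_sys
# Expected on the host above: declare -a nodes_sys=([0]="512" [1]="1024")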
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22307292 kB' 'MemUsed: 10522592 kB' 'SwapCached: 0 kB' 'Active: 8058456 kB' 'Inactive: 187176 kB' 'Active(anon): 7662300 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8020844 kB' 'Mapped: 86148 kB' 'AnonPages: 227976 kB' 'Shmem: 7437512 kB' 'KernelStack: 8104 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 332164 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 216480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.282 02:19:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31-32 -- # [log condensed: the read loop walks the remaining node0 meminfo keys (SwapCached, Active, Inactive ... HugePages_Total, HugePages_Free) and continues past each one that is not HugePages_Surp] 00:02:48.282-00:02:48.284
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.284
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.284
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.284
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.284
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.284
02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:48.284
02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local
get=HugePages_Surp 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 14251264 kB' 'MemUsed: 13460580 kB' 'SwapCached: 0 kB' 'Active: 6353772 kB' 'Inactive: 4283608 kB' 'Active(anon): 6160768 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10391200 kB' 'Mapped: 104112 kB' 'AnonPages: 245912 kB' 'Shmem: 5914588 kB' 'KernelStack: 4872 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124240 kB' 'Slab: 301236 kB' 'SReclaimable: 124240 kB' 'SUnreclaim: 176996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.284 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:48.285 node0=512 expecting 512 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
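The xtrace running through this stretch is SPDK's get_meminfo helper walking /proc/meminfo one line at a time: it splits each line with IFS=': ', compares the key against the requested field (HugePages_Surp here), and echoes the value when it matches. A rough sketch of that loop, inferred only from the trace and not copied from setup/common.sh (exact structure, fallbacks, and error handling may differ):

  # Approximation of setup/common.sh's get_meminfo, reconstructed from the trace above.
  shopt -s extglob                      # needed for the "Node +([0-9]) " prefix strip below
  get_meminfo() {
      local get=$1 node=${2:-}          # meminfo key to look up, optional NUMA node
      local var val _
      local mem_f=/proc/meminfo
      # With a node argument, prefer the per-node meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node <N> " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  # Example matching the trace: get_meminfo HugePages_Surp prints 0 on this host.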
00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:48.285 node1=1024 expecting 1024 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:48.285 00:02:48.285 real 0m1.635s 00:02:48.285 user 0m0.683s 00:02:48.285 sys 0m0.919s 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:48.285 02:19:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:48.285 ************************************ 00:02:48.285 END TEST custom_alloc 00:02:48.285 ************************************ 00:02:48.285 02:19:35 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:48.285 02:19:35 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:48.285 02:19:35 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:48.285 02:19:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:48.285 ************************************ 00:02:48.285 START TEST no_shrink_alloc 00:02:48.285 ************************************ 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:48.285 02:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.285 02:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:49.656 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.656 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:49.656 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.656 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.656 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:49.656 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.656 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.656 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.656 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.656 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.656 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.656 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.656 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:49.656 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.656 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.656 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.656 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37465492 kB' 'MemAvailable: 42151776 kB' 'Buffers: 2696 kB' 'Cached: 18409392 kB' 'SwapCached: 0 kB' 'Active: 14411064 kB' 'Inactive: 4470784 kB' 'Active(anon): 13821904 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473084 kB' 'Mapped: 190304 kB' 'Shmem: 13352144 kB' 'KReclaimable: 239924 kB' 'Slab: 633580 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393656 kB' 'KernelStack: 12880 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14935412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.656 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.657 02:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.657 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37468292 kB' 'MemAvailable: 42154576 kB' 'Buffers: 2696 kB' 'Cached: 18409396 kB' 'SwapCached: 0 kB' 'Active: 14411600 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822440 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473668 kB' 'Mapped: 190304 kB' 'Shmem: 13352148 kB' 'KReclaimable: 239924 kB' 'Slab: 633564 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393640 kB' 'KernelStack: 12912 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14935676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.658 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.659 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 
02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _
00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.920 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
...
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37479636 kB' 'MemAvailable: 42165920 kB' 'Buffers: 2696 kB' 'Cached: 18409416 kB' 'SwapCached: 0 kB' 'Active: 14411168 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822008 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473164 kB' 'Mapped: 190228 kB' 'Shmem: 13352168 kB' 'KReclaimable: 239924 kB' 'Slab: 633548 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393624 kB' 'KernelStack: 12896 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14935452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB'
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.921 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
...
00:02:49.922 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
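The get_meminfo trace above just walks /proc/meminfo field by field with IFS=': ' until the requested key matches, then echoes its value; both HugePages_Surp and HugePages_Rsvd came back 0 here. A minimal stand-alone sketch of that kind of lookup (the helper name and the usage line are illustrative, not the actual setup/common.sh code):

  # Look up one field from /proc/meminfo the way the traced loop does:
  # split each line on ': ' and stop at the first matching key.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  # Hypothetical usage: surp=$(meminfo_value HugePages_Surp)   # -> 0 on this machine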
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:49.923 nr_hugepages=1024
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:49.923 resv_hugepages=0
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:49.923 surplus_hugepages=0
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:49.923 anon_hugepages=0
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37479384 kB' 'MemAvailable: 42165668 kB' 'Buffers: 2696 kB' 'Cached: 18409436 kB' 'SwapCached: 0 kB' 'Active: 14411276 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822116 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473232 kB' 'Mapped: 190228 kB' 'Shmem: 13352188 kB' 'KReclaimable: 239924 kB' 'Slab: 633548 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393624 kB' 'KernelStack: 12928 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14935472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB'
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:49.923 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
...
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
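At setup/hugepages.sh@107 and @110 the script checks that the kernel-reported HugePages_Total still equals the requested page count plus any reserved and surplus pages (1024 == 1024 + 0 + 0 in this run). A sketch of the same accounting check done directly against /proc/meminfo (variable names are illustrative):

  # Hugepage accounting: the total reported by the kernel should equal the
  # requested count plus reserved and surplus pages.
  nr_hugepages=1024
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2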
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21247116 kB' 'MemUsed: 11582768 kB' 'SwapCached: 0 kB' 'Active: 8057940 kB' 'Inactive: 187176 kB' 'Active(anon): 7661784 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8020852 kB' 'Mapped: 86160 kB' 'AnonPages: 227480 kB' 'Shmem: 7437520 kB' 'KernelStack: 8040 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 332320 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 216636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.925 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
...
00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
IFS=': ' 00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:49.926 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:49.927 node0=1024 expecting 1024 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.927 02:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:51.328 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:51.328 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:51.328 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:51.328 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:51.328 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:51.328 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:51.328 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:51.328 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:51.328 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:51.328 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:51.328 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:51.328 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:51.328 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:51.328 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:51.328 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:51.328 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:51.328 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:51.328 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local 
surp 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37490108 kB' 'MemAvailable: 42176392 kB' 'Buffers: 2696 kB' 'Cached: 18409508 kB' 'SwapCached: 0 kB' 'Active: 14411760 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822600 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473616 kB' 'Mapped: 190340 kB' 'Shmem: 13352260 kB' 'KReclaimable: 239924 kB' 'Slab: 633572 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393648 kB' 'KernelStack: 12928 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14935896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.328 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.328 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
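[editor's note] The trace above is setup/common.sh's get_meminfo helper (invoked from setup/hugepages.sh@97 as "get_meminfo AnonHugePages") walking the captured meminfo snapshot field by field: each entry is split with IFS=': ' into key and value, keys that do not match fall through to "continue", and the matching key's value is echoed back with return 0. A minimal standalone sketch of that parsing pattern, reading /proc/meminfo directly instead of the mapfile/array handling the real script uses (the demo function name is illustrative only):

# Illustrative sketch: parse /proc/meminfo the same way the traced loop does
# (IFS=': ' with read -r var val _) and print the requested field's value.
get_meminfo_demo() {
    local get=$1
    local var val _
    # The real helper can also read /sys/devices/system/node/nodeN/meminfo,
    # stripping the leading "Node N " prefix first; omitted here for brevity.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # kB figure for most fields, a bare count for HugePages_*
            return 0
        fi
    done < /proc/meminfo
    echo 0
}
# get_meminfo_demo AnonHugePages   -> 0 on this node, matching the anon=0 in the trace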
00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.329 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37489696 kB' 'MemAvailable: 42175980 kB' 'Buffers: 2696 kB' 'Cached: 18409512 kB' 'SwapCached: 0 kB' 'Active: 14412084 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822924 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473932 kB' 'Mapped: 190312 kB' 'Shmem: 13352264 kB' 'KReclaimable: 239924 kB' 'Slab: 633580 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393656 kB' 'KernelStack: 12992 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14935916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
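[editor's note] Context for this second verify pass: just before it, setup/hugepages.sh@202 re-ran setup.sh with CLEAR_HUGE=no and NRHUGE=512, and the setup output reported "Requested 512 hugepages but 1024 already allocated on node0". That is exactly what the no_shrink_alloc test is asserting: asking for fewer pages than are already reserved must not shrink the pool. A rough sketch of that grow-only rule against the standard per-node sysfs counter (the function and its guard are illustrative, not the actual setup.sh logic):

# Grow-only hugepage allocation: never write a smaller value than what the
# node already holds (sysfs path is the stock 2 MB hugepage counter).
no_shrink_alloc_demo() {
    local requested=$1 node=${2:-0}
    local nr=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    local current
    current=$(cat "$nr")
    if (( requested > current )); then
        echo "$requested" > "$nr"
    else
        echo "INFO: Requested $requested hugepages but $current already allocated on node$node"
    fi
}
# no_shrink_alloc_demo 512 0   -> leaves the existing 1024 pages in place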
00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.330 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.331 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37491248 kB' 'MemAvailable: 42177532 kB' 'Buffers: 2696 kB' 'Cached: 
18409512 kB' 'SwapCached: 0 kB' 'Active: 14411312 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822152 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473112 kB' 'Mapped: 190236 kB' 'Shmem: 13352264 kB' 'KReclaimable: 239924 kB' 'Slab: 633564 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393640 kB' 'KernelStack: 12960 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14935936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.332 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.333 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:51.334 nr_hugepages=1024 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.334 resv_hugepages=0 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.334 surplus_hugepages=0 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.334 anon_hugepages=0 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.334 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541728 kB' 'MemFree: 37491248 kB' 'MemAvailable: 42177532 kB' 'Buffers: 2696 kB' 'Cached: 18409552 kB' 'SwapCached: 0 kB' 'Active: 14411624 kB' 'Inactive: 4470784 kB' 'Active(anon): 13822464 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473404 kB' 'Mapped: 190236 kB' 'Shmem: 13352304 kB' 'KReclaimable: 239924 kB' 'Slab: 633564 kB' 'SReclaimable: 239924 kB' 'SUnreclaim: 393640 kB' 'KernelStack: 12976 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14935976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2780764 kB' 'DirectMap2M: 19159040 kB' 'DirectMap1G: 47185920 kB' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.595 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
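The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]] ... continue" above and below are xtrace from setup/common.sh's get_meminfo helper: it snapshots a meminfo file into an array, strips any "Node N " prefixes, and walks the fields until the requested key turns up, then echoes that key's value. A minimal sketch of the pattern reconstructed from this trace (illustrative only, not the verbatim SPDK source):

    shopt -s extglob
    get_meminfo() {                            # get_meminfo <field> [<node>]
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Rsvd                 # -> 0 in the run above
    get_meminfo HugePages_Surp 0               # -> surplus pages on node 0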
00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.596 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21239420 kB' 'MemUsed: 11590464 kB' 'SwapCached: 0 kB' 'Active: 8058700 kB' 'Inactive: 187176 kB' 'Active(anon): 7662544 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8020864 kB' 'Mapped: 86168 kB' 'AnonPages: 228192 kB' 'Shmem: 7437532 kB' 'KernelStack: 8040 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 332376 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 216692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.597 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 
02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:51.598 node0=1024 expecting 1024 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:51.598 00:02:51.598 real 0m3.191s 00:02:51.598 user 0m1.264s 00:02:51.598 sys 0m1.863s 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:51.598 02:19:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:51.598 ************************************ 00:02:51.598 END TEST no_shrink_alloc 00:02:51.598 ************************************ 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:51.598 02:19:38 
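At this point no_shrink_alloc has verified both the global identity from the trace above, HugePages_Total == nr_hugepages + surplus + reserved (1024 == 1024 + 0 + 0), and that all 1024 pre-allocated pages sit on node 0 ("node0=1024 expecting 1024"). The per-node figures live in the same sysfs tree that the clear_hp pass below writes into; a small sketch of reading them back, assuming the 2048 kB page size reported above (the nodes_sys name mirrors the trace, the read path is standard kernel sysfs rather than the exact SPDK code):

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    for i in "${!nodes_sys[@]}"; do
        printf 'node%s=%s\n' "$i" "${nodes_sys[$i]}"   # node0=1024, node1=0 in this run
    done

The clear_hp pass that the trace moves into next echoes 0 into each per-node hugepage pool (the nr_hugepages knob under node*/hugepages/hugepages-*/) and, as shown below, exports CLEAR_HUGE=yes so the following test groups start with no reserved pages.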
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:51.598 02:19:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:51.598 00:02:51.598 real 0m12.750s 00:02:51.598 user 0m4.838s 00:02:51.598 sys 0m6.741s 00:02:51.598 02:19:38 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:51.598 02:19:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:51.598 ************************************ 00:02:51.598 END TEST hugepages 00:02:51.598 ************************************ 00:02:51.598 02:19:38 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:51.598 02:19:38 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:51.598 02:19:38 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:51.598 02:19:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:51.598 ************************************ 00:02:51.598 START TEST driver 00:02:51.598 ************************************ 00:02:51.598 02:19:38 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:51.598 * Looking for test storage... 
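The guess_driver test that starts here reduces to a small decision: if the host exposes IOMMU groups and the vfio_pci module resolves to real kernel objects, bind devices to vfio-pci, otherwise fall back. A minimal sketch of that logic, reconstructed from the setup/driver.sh trace below; the sysfs path, the *.ko* test and the 'No valid driver found' string come from the trace, while the uio_pci_generic fallback branch is an assumption (it is not exercised in this run):

is_driver() {
    # usable only if modprobe can resolve the module to actual .ko objects
    [[ $(modprobe --show-depends "$1" 2> /dev/null) == *.ko* ]]
}

pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if ((${#iommu_groups[@]} > 0)) && is_driver vfio_pci; then
        echo vfio-pci                        # this host: 189 IOMMU groups, vfio_pci resolves
    elif is_driver uio_pci_generic; then     # assumed fallback, not taken here
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

On this node the trace shows 189 IOMMU groups and a full vfio dependency chain, so the run settles on vfio-pci.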
00:02:51.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:51.598 02:19:38 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:51.598 02:19:38 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.598 02:19:38 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.128 02:19:41 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:54.128 02:19:41 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:54.128 02:19:41 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:54.128 02:19:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:54.128 ************************************ 00:02:54.128 START TEST guess_driver 00:02:54.128 ************************************ 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:54.128 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:54.128 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:54.128 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:54.128 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:54.128 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:54.128 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:54.128 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:54.128 02:19:41 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:54.128 Looking for driver=vfio-pci 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.128 02:19:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.503 02:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:56.436 02:19:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:56.436 02:19:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:56.436 02:19:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:56.693 02:19:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:56.693 02:19:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:02:56.693 02:19:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:56.693 02:19:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.221 00:02:59.221 real 0m5.090s 00:02:59.221 user 0m1.235s 00:02:59.221 sys 0m1.997s 00:02:59.221 02:19:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:59.221 02:19:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:02:59.221 ************************************ 00:02:59.221 END TEST guess_driver 00:02:59.221 ************************************ 00:02:59.221 00:02:59.221 real 0m7.620s 00:02:59.221 user 0m1.798s 00:02:59.221 sys 0m3.099s 00:02:59.221 02:19:46 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:59.221 
02:19:46 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:59.221 ************************************ 00:02:59.221 END TEST driver 00:02:59.221 ************************************ 00:02:59.221 02:19:46 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:59.221 02:19:46 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:59.221 02:19:46 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:59.221 02:19:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:59.221 ************************************ 00:02:59.221 START TEST devices 00:02:59.221 ************************************ 00:02:59.221 02:19:46 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:59.221 * Looking for test storage... 00:02:59.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.221 02:19:46 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:59.221 02:19:46 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:02:59.221 02:19:46 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.221 02:19:46 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:01.122 02:19:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:01.122 02:19:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:01.122 02:19:48 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:01.122 02:19:48 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:01.122 02:19:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:01.122 02:19:48 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:01.122 02:19:48 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.122 02:19:48 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:01.122 02:19:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:01.123 02:19:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:01.123 02:19:48 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:01.123 No valid GPT data, 
bailing 00:03:01.123 02:19:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:01.123 02:19:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:01.123 02:19:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:01.123 02:19:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:01.123 02:19:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:01.123 02:19:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:01.123 02:19:48 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:01.123 02:19:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:01.123 02:19:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:01.123 02:19:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:01.123 02:19:48 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:01.123 02:19:48 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:01.123 02:19:48 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:01.123 02:19:48 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:01.123 02:19:48 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:01.123 02:19:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:01.123 ************************************ 00:03:01.123 START TEST nvme_mount 00:03:01.123 ************************************ 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:01.123 02:19:48 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:01.123 02:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:02.056 Creating new GPT entries in memory. 00:03:02.056 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:02.056 other utilities. 00:03:02.056 02:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:02.056 02:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:02.056 02:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:02.056 02:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:02.056 02:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:02.990 Creating new GPT entries in memory. 00:03:02.990 The operation has completed successfully. 00:03:02.990 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:02.991 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:02.991 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2166131 00:03:02.991 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:02.991 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:02.991 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:02.991 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:02.991 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
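Stripped of the xtrace plumbing, the nvme_mount setup traced above is a short partition-format-mount sequence. A minimal sketch under the names shown in the trace (device /dev/nvme0n1, a single 1 GiB partition at sectors 2048-2099199, the spdk/test/setup/nvme_mount mount point); the redirection that creates the dummy test file is an assumption, since xtrace does not print redirections:

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                            # drop any existing partition table
flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # one ~1 GiB partition, as in the trace
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"                           # quiet, force re-format
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"                                # assumed: the verify step's dummy file

The verify step that follows then only has to confirm the partition shows up as mounted and that test_nvme exists before tearing everything down again.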
00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.249 02:19:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:04.624 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:04.624 02:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:04.882 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:04.882 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:04.882 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:04.882 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:04.882 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:04.882 02:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:04.882 02:19:52 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.882 02:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:04.882 02:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:04.882 02:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.882 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:04.882 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:04.882 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.883 02:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.256 02:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:07.630 02:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:07.630 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:07.630 00:03:07.630 real 0m6.635s 00:03:07.630 user 0m1.665s 00:03:07.630 sys 0m2.574s 00:03:07.631 02:19:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:07.631 02:19:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:07.631 ************************************ 00:03:07.631 END TEST nvme_mount 00:03:07.631 ************************************ 
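The teardown that closes TEST nvme_mount is the cleanup_nvme sequence seen in the trace (setup/devices.sh@20-28). A minimal sketch, with the commands taken verbatim from the trace and only the function wrapper reconstructed:

cleanup_nvme() {
    local mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"                   # unmount only if still mounted
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # drop the ext4 signature on the partition
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # wipe any remaining signatures on the whole disk
}

wipefs is what produces the '... bytes were erased at offset ...' lines in the log; once it runs, the disk is back to a blank state for the next test.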
00:03:07.631 02:19:54 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:07.631 02:19:54 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:07.631 02:19:54 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:07.631 02:19:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:07.631 ************************************ 00:03:07.631 START TEST dm_mount 00:03:07.631 ************************************ 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:07.631 02:19:55 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:09.008 Creating new GPT entries in memory. 00:03:09.008 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:09.008 other utilities. 00:03:09.008 02:19:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:09.008 02:19:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:09.008 02:19:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:09.008 02:19:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:09.008 02:19:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:09.966 Creating new GPT entries in memory. 00:03:09.966 The operation has completed successfully. 
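The dm_mount test below builds a device-mapper target on top of the two GPT partitions it is creating here, then formats and mounts it exactly like the plain-NVMe case. A minimal sketch of that remainder; dmsetup create, the /dev/mapper/nvme_dm_test readlink, the holders checks and the mkfs/mount come from the trace, while the linear mapping table and the blockdev size lookups are assumptions (the trace only shows 'dmsetup create nvme_dm_test' with no table):

dm_name=nvme_dm_test
dm_mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
p1_sz=$(blockdev --getsz "$p1") p2_sz=$(blockdev --getsz "$p2")   # sizes in 512-byte sectors

# assumed table: concatenate the two partitions into one linear dm device
printf '%s\n' \
    "0 $p1_sz linear $p1 0" \
    "$p1_sz $p2_sz linear $p2 0" | dmsetup create "$dm_name"

dm=$(readlink -f "/dev/mapper/$dm_name")                # resolves to /dev/dm-0 in this run
[[ -e /sys/class/block/nvme0n1p1/holders/${dm##*/} ]]   # both partitions now report dm-0 as holder
[[ -e /sys/class/block/nvme0n1p2/holders/${dm##*/} ]]
mkdir -p "$dm_mnt"
mkfs.ext4 -qF "/dev/mapper/$dm_name"
mount "/dev/mapper/$dm_name" "$dm_mnt"

The holder links are what later makes setup.sh refuse to bind nvme0n1 to vfio-pci while dm-0 is still stacked on top of it, which is exactly the 'Active devices: holder@nvme0n1p1:dm-0,... so not binding PCI dev' message in the config output further down.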
00:03:09.966 02:19:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:09.966 02:19:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:09.966 02:19:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:09.966 02:19:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:09.966 02:19:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:10.901 The operation has completed successfully. 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2168766 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.901 02:19:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:12.274 02:19:59 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.274 02:19:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:13.655 02:20:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.913 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:13.914 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:13.914 00:03:13.914 real 0m6.138s 00:03:13.914 user 0m1.167s 00:03:13.914 sys 0m1.881s 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:13.914 02:20:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:13.914 ************************************ 00:03:13.914 END TEST dm_mount 00:03:13.914 ************************************ 00:03:13.914 02:20:01 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:13.914 02:20:01 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:13.914 02:20:01 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.914 02:20:01 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:13.914 02:20:01 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:13.914 02:20:01 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:13.914 02:20:01 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:14.172 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:14.172 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:14.172 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:14.172 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:14.172 02:20:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:14.172 02:20:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.172 02:20:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:14.172 02:20:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:14.172 02:20:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:14.172 02:20:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:14.172 02:20:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:14.172 00:03:14.172 real 0m14.911s 00:03:14.172 user 0m3.599s 00:03:14.172 sys 0m5.589s 00:03:14.172 02:20:01 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:14.172 02:20:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:14.172 ************************************ 00:03:14.172 END TEST devices 00:03:14.172 ************************************ 00:03:14.172 00:03:14.172 real 0m46.897s 00:03:14.172 user 0m13.918s 00:03:14.172 sys 0m21.586s 00:03:14.172 02:20:01 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:14.172 02:20:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:14.172 ************************************ 00:03:14.172 END TEST setup.sh 00:03:14.172 ************************************ 00:03:14.172 02:20:01 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:15.547 Hugepages 00:03:15.547 node hugesize free / total 00:03:15.547 node0 1048576kB 0 / 0 00:03:15.547 node0 2048kB 2048 / 2048 00:03:15.547 node1 1048576kB 0 / 0 00:03:15.547 node1 2048kB 0 / 0 00:03:15.547 00:03:15.547 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.547 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:15.547 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:15.547 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:15.547 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:15.547 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:15.547 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:15.547 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:15.547 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:15.547 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:15.547 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:15.547 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:15.547 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:15.547 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:15.547 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:15.547 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:15.547 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:15.547 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:15.547 02:20:02 -- spdk/autotest.sh@130 -- # uname -s 00:03:15.547 02:20:02 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:15.547 02:20:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:15.547 02:20:02 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.920 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:16.920 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:16.920 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:16.920 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:16.920 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:16.920 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:16.920 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:16.920 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:16.920 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:16.920 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:16.920 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:16.920 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:16.920 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:16.920 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:16.920 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:16.920 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:17.854 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:18.112 02:20:05 -- common/autotest_common.sh@1528 -- # sleep 1 00:03:19.047 02:20:06 -- common/autotest_common.sh@1529 -- # bdfs=() 00:03:19.047 02:20:06 -- common/autotest_common.sh@1529 -- # local bdfs 00:03:19.047 02:20:06 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:03:19.047 02:20:06 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:03:19.047 02:20:06 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:19.047 02:20:06 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:19.047 02:20:06 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:19.047 02:20:06 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:19.047 02:20:06 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:19.047 02:20:06 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:19.047 02:20:06 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:19.047 02:20:06 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.419 Waiting for block devices as requested 00:03:20.419 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:20.419 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:20.419 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:20.677 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:20.677 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:20.677 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:20.677 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:20.934 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:20.934 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:20.934 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:20.934 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:21.192 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:21.192 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:21.192 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:21.192 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:21.450 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:21.450 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:21.450 02:20:08 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
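The loop opening above ("for bdf in ${bdfs[@]}") walks the NVMe controllers that get_nvme_bdfs() collected a moment earlier by piping scripts/gen_nvme.sh through jq. A minimal standalone sketch of that discovery step, with $rootdir standing in for the workspace's spdk checkout (the helper and the jq filter are copied from the trace):

  # Build the list of NVMe PCI addresses (BDFs) from SPDK's generated bdev config.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      echo "NVMe controller at $bdf"    # on this node: a single entry, 0000:88:00.0
  done

Each BDF is then mapped back to its /dev/nvme* controller through sysfs (the readlink on /sys/class/nvme/nvme0 in the lines that follow) before the per-device nvme id-ctrl checks run.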
00:03:21.450 02:20:08 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:21.450 02:20:08 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:03:21.450 02:20:08 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:03:21.450 02:20:08 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:21.450 02:20:08 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:21.450 02:20:08 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:21.450 02:20:08 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:03:21.450 02:20:08 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:03:21.450 02:20:08 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:03:21.450 02:20:08 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:03:21.450 02:20:08 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:21.450 02:20:08 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:21.450 02:20:08 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:03:21.450 02:20:08 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:21.450 02:20:08 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:21.450 02:20:08 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:03:21.450 02:20:08 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:21.450 02:20:08 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:21.450 02:20:08 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:21.450 02:20:08 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:21.450 02:20:08 -- common/autotest_common.sh@1553 -- # continue 00:03:21.450 02:20:08 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:21.450 02:20:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.450 02:20:08 -- common/autotest_common.sh@10 -- # set +x 00:03:21.450 02:20:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:21.450 02:20:08 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:21.450 02:20:08 -- common/autotest_common.sh@10 -- # set +x 00:03:21.451 02:20:08 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.823 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:22.824 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:22.824 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:22.824 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:22.824 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:22.824 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:22.824 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:23.082 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:23.082 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:23.082 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:23.082 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:23.082 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:23.082 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:23.082 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:23.082 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:23.082 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:24.018 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:24.018 02:20:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:24.018 02:20:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:24.018 02:20:11 -- 
common/autotest_common.sh@10 -- # set +x 00:03:24.276 02:20:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:24.276 02:20:11 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:03:24.276 02:20:11 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:03:24.276 02:20:11 -- common/autotest_common.sh@1573 -- # bdfs=() 00:03:24.276 02:20:11 -- common/autotest_common.sh@1573 -- # local bdfs 00:03:24.276 02:20:11 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:03:24.276 02:20:11 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:24.276 02:20:11 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:24.276 02:20:11 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:24.276 02:20:11 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:24.276 02:20:11 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:24.276 02:20:11 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:24.276 02:20:11 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:24.276 02:20:11 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:03:24.276 02:20:11 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:24.276 02:20:11 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:03:24.276 02:20:11 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:24.276 02:20:11 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:03:24.276 02:20:11 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:03:24.276 02:20:11 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:03:24.276 02:20:11 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=2174661 00:03:24.276 02:20:11 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:24.276 02:20:11 -- common/autotest_common.sh@1594 -- # waitforlisten 2174661 00:03:24.276 02:20:11 -- common/autotest_common.sh@827 -- # '[' -z 2174661 ']' 00:03:24.276 02:20:11 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:24.276 02:20:11 -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:24.276 02:20:11 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:24.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:24.276 02:20:11 -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:24.276 02:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:24.276 [2024-05-15 02:20:11.563340] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:03:24.276 [2024-05-15 02:20:11.563446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174661 ] 00:03:24.276 EAL: No free 2048 kB hugepages reported on node 1 00:03:24.276 [2024-05-15 02:20:11.637097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:24.561 [2024-05-15 02:20:11.754628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:25.127 02:20:12 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:25.127 02:20:12 -- common/autotest_common.sh@860 -- # return 0 00:03:25.127 02:20:12 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:03:25.127 02:20:12 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:03:25.127 02:20:12 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:28.411 nvme0n1 00:03:28.411 02:20:15 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:28.411 [2024-05-15 02:20:15.797310] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:28.411 [2024-05-15 02:20:15.797353] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:28.411 request: 00:03:28.411 { 00:03:28.411 "nvme_ctrlr_name": "nvme0", 00:03:28.411 "password": "test", 00:03:28.411 "method": "bdev_nvme_opal_revert", 00:03:28.411 "req_id": 1 00:03:28.411 } 00:03:28.411 Got JSON-RPC error response 00:03:28.411 response: 00:03:28.411 { 00:03:28.411 "code": -32603, 00:03:28.411 "message": "Internal error" 00:03:28.411 } 00:03:28.411 02:20:15 -- common/autotest_common.sh@1600 -- # true 00:03:28.411 02:20:15 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:03:28.411 02:20:15 -- common/autotest_common.sh@1604 -- # killprocess 2174661 00:03:28.411 02:20:15 -- common/autotest_common.sh@946 -- # '[' -z 2174661 ']' 00:03:28.411 02:20:15 -- common/autotest_common.sh@950 -- # kill -0 2174661 00:03:28.411 02:20:15 -- common/autotest_common.sh@951 -- # uname 00:03:28.411 02:20:15 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:28.411 02:20:15 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2174661 00:03:28.670 02:20:15 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:28.670 02:20:15 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:28.670 02:20:15 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2174661' 00:03:28.670 killing process with pid 2174661 00:03:28.670 02:20:15 -- common/autotest_common.sh@965 -- # kill 2174661 00:03:28.670 02:20:15 -- common/autotest_common.sh@970 -- # wait 2174661 00:03:30.571 02:20:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:30.571 02:20:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:30.571 02:20:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:30.571 02:20:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:30.571 02:20:17 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:30.571 02:20:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:30.571 02:20:17 -- common/autotest_common.sh@10 -- # set +x 00:03:30.571 02:20:17 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:30.571 02:20:17 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.572 02:20:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.572 02:20:17 -- common/autotest_common.sh@10 -- # set +x 00:03:30.572 ************************************ 00:03:30.572 START TEST env 00:03:30.572 ************************************ 00:03:30.572 02:20:17 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:30.572 * Looking for test storage... 00:03:30.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:30.572 02:20:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:30.572 02:20:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.572 02:20:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.572 02:20:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:30.572 ************************************ 00:03:30.572 START TEST env_memory 00:03:30.572 ************************************ 00:03:30.572 02:20:17 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:30.572 00:03:30.572 00:03:30.572 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.572 http://cunit.sourceforge.net/ 00:03:30.572 00:03:30.572 00:03:30.572 Suite: memory 00:03:30.572 Test: alloc and free memory map ...[2024-05-15 02:20:17.859992] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:30.572 passed 00:03:30.572 Test: mem map translation ...[2024-05-15 02:20:17.880650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:30.572 [2024-05-15 02:20:17.880672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:30.572 [2024-05-15 02:20:17.880711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:30.572 [2024-05-15 02:20:17.880723] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:30.572 passed 00:03:30.572 Test: mem map registration ...[2024-05-15 02:20:17.921508] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:30.572 [2024-05-15 02:20:17.921528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:30.572 passed 00:03:30.572 Test: mem map adjacent registrations ...passed 00:03:30.572 00:03:30.572 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.572 suites 1 1 n/a 0 0 00:03:30.572 tests 4 4 4 0 0 00:03:30.572 asserts 152 152 152 0 n/a 00:03:30.572 00:03:30.572 Elapsed time = 0.141 seconds 00:03:30.572 00:03:30.572 real 0m0.148s 00:03:30.572 user 0m0.141s 00:03:30.572 sys 0m0.006s 00:03:30.572 02:20:17 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:30.572 02:20:17 
env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:30.572 ************************************ 00:03:30.572 END TEST env_memory 00:03:30.572 ************************************ 00:03:30.831 02:20:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:30.831 02:20:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.831 02:20:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.831 02:20:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:30.831 ************************************ 00:03:30.831 START TEST env_vtophys 00:03:30.831 ************************************ 00:03:30.831 02:20:18 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:30.831 EAL: lib.eal log level changed from notice to debug 00:03:30.831 EAL: Detected lcore 0 as core 0 on socket 0 00:03:30.831 EAL: Detected lcore 1 as core 1 on socket 0 00:03:30.831 EAL: Detected lcore 2 as core 2 on socket 0 00:03:30.831 EAL: Detected lcore 3 as core 3 on socket 0 00:03:30.831 EAL: Detected lcore 4 as core 4 on socket 0 00:03:30.831 EAL: Detected lcore 5 as core 5 on socket 0 00:03:30.831 EAL: Detected lcore 6 as core 8 on socket 0 00:03:30.831 EAL: Detected lcore 7 as core 9 on socket 0 00:03:30.831 EAL: Detected lcore 8 as core 10 on socket 0 00:03:30.831 EAL: Detected lcore 9 as core 11 on socket 0 00:03:30.831 EAL: Detected lcore 10 as core 12 on socket 0 00:03:30.831 EAL: Detected lcore 11 as core 13 on socket 0 00:03:30.831 EAL: Detected lcore 12 as core 0 on socket 1 00:03:30.831 EAL: Detected lcore 13 as core 1 on socket 1 00:03:30.831 EAL: Detected lcore 14 as core 2 on socket 1 00:03:30.831 EAL: Detected lcore 15 as core 3 on socket 1 00:03:30.831 EAL: Detected lcore 16 as core 4 on socket 1 00:03:30.831 EAL: Detected lcore 17 as core 5 on socket 1 00:03:30.831 EAL: Detected lcore 18 as core 8 on socket 1 00:03:30.831 EAL: Detected lcore 19 as core 9 on socket 1 00:03:30.831 EAL: Detected lcore 20 as core 10 on socket 1 00:03:30.831 EAL: Detected lcore 21 as core 11 on socket 1 00:03:30.831 EAL: Detected lcore 22 as core 12 on socket 1 00:03:30.831 EAL: Detected lcore 23 as core 13 on socket 1 00:03:30.831 EAL: Detected lcore 24 as core 0 on socket 0 00:03:30.831 EAL: Detected lcore 25 as core 1 on socket 0 00:03:30.831 EAL: Detected lcore 26 as core 2 on socket 0 00:03:30.831 EAL: Detected lcore 27 as core 3 on socket 0 00:03:30.831 EAL: Detected lcore 28 as core 4 on socket 0 00:03:30.831 EAL: Detected lcore 29 as core 5 on socket 0 00:03:30.831 EAL: Detected lcore 30 as core 8 on socket 0 00:03:30.831 EAL: Detected lcore 31 as core 9 on socket 0 00:03:30.831 EAL: Detected lcore 32 as core 10 on socket 0 00:03:30.831 EAL: Detected lcore 33 as core 11 on socket 0 00:03:30.831 EAL: Detected lcore 34 as core 12 on socket 0 00:03:30.831 EAL: Detected lcore 35 as core 13 on socket 0 00:03:30.831 EAL: Detected lcore 36 as core 0 on socket 1 00:03:30.831 EAL: Detected lcore 37 as core 1 on socket 1 00:03:30.831 EAL: Detected lcore 38 as core 2 on socket 1 00:03:30.831 EAL: Detected lcore 39 as core 3 on socket 1 00:03:30.831 EAL: Detected lcore 40 as core 4 on socket 1 00:03:30.831 EAL: Detected lcore 41 as core 5 on socket 1 00:03:30.831 EAL: Detected lcore 42 as core 8 on socket 1 00:03:30.831 EAL: Detected lcore 43 as core 9 on socket 1 00:03:30.831 EAL: Detected lcore 44 as core 10 on socket 1 00:03:30.831 EAL: 
Detected lcore 45 as core 11 on socket 1 00:03:30.831 EAL: Detected lcore 46 as core 12 on socket 1 00:03:30.831 EAL: Detected lcore 47 as core 13 on socket 1 00:03:30.831 EAL: Maximum logical cores by configuration: 128 00:03:30.831 EAL: Detected CPU lcores: 48 00:03:30.831 EAL: Detected NUMA nodes: 2 00:03:30.831 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:30.831 EAL: Detected shared linkage of DPDK 00:03:30.831 EAL: No shared files mode enabled, IPC will be disabled 00:03:30.831 EAL: Bus pci wants IOVA as 'DC' 00:03:30.831 EAL: Buses did not request a specific IOVA mode. 00:03:30.831 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:30.831 EAL: Selected IOVA mode 'VA' 00:03:30.831 EAL: No free 2048 kB hugepages reported on node 1 00:03:30.831 EAL: Probing VFIO support... 00:03:30.831 EAL: IOMMU type 1 (Type 1) is supported 00:03:30.831 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:30.831 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:30.831 EAL: VFIO support initialized 00:03:30.831 EAL: Ask a virtual area of 0x2e000 bytes 00:03:30.831 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:30.831 EAL: Setting up physically contiguous memory... 00:03:30.831 EAL: Setting maximum number of open files to 524288 00:03:30.831 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:30.831 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:30.831 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:30.831 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.831 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:30.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:30.831 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.831 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:30.831 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:30.831 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.831 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:30.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:30.831 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.831 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:30.831 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:30.831 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.831 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:30.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:30.831 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.831 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:30.831 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:30.831 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.831 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:30.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:30.831 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.831 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:30.831 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:30.831 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:30.831 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.831 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:30.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:30.831 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.831 EAL: Virtual area found at 0x201000a00000 (size = 
0x400000000) 00:03:30.831 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:30.831 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.831 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:30.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:30.831 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.831 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:30.831 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:30.831 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.831 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:30.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:30.831 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.831 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:30.831 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:30.831 EAL: Ask a virtual area of 0x61000 bytes 00:03:30.831 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:30.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:30.831 EAL: Ask a virtual area of 0x400000000 bytes 00:03:30.831 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:30.831 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:30.831 EAL: Hugepages will be freed exactly as allocated. 00:03:30.831 EAL: No shared files mode enabled, IPC is disabled 00:03:30.831 EAL: No shared files mode enabled, IPC is disabled 00:03:30.831 EAL: TSC frequency is ~2700000 KHz 00:03:30.832 EAL: Main lcore 0 is ready (tid=7f2a4ef3da00;cpuset=[0]) 00:03:30.832 EAL: Trying to obtain current memory policy. 00:03:30.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.832 EAL: Restoring previous memory policy: 0 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was expanded by 2MB 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:30.832 EAL: Mem event callback 'spdk:(nil)' registered 00:03:30.832 00:03:30.832 00:03:30.832 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.832 http://cunit.sourceforge.net/ 00:03:30.832 00:03:30.832 00:03:30.832 Suite: components_suite 00:03:30.832 Test: vtophys_malloc_test ...passed 00:03:30.832 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:30.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.832 EAL: Restoring previous memory policy: 4 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was expanded by 4MB 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was shrunk by 4MB 00:03:30.832 EAL: Trying to obtain current memory policy. 
00:03:30.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.832 EAL: Restoring previous memory policy: 4 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was expanded by 6MB 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was shrunk by 6MB 00:03:30.832 EAL: Trying to obtain current memory policy. 00:03:30.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.832 EAL: Restoring previous memory policy: 4 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was expanded by 10MB 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was shrunk by 10MB 00:03:30.832 EAL: Trying to obtain current memory policy. 00:03:30.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.832 EAL: Restoring previous memory policy: 4 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was expanded by 18MB 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was shrunk by 18MB 00:03:30.832 EAL: Trying to obtain current memory policy. 00:03:30.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.832 EAL: Restoring previous memory policy: 4 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was expanded by 34MB 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was shrunk by 34MB 00:03:30.832 EAL: Trying to obtain current memory policy. 00:03:30.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.832 EAL: Restoring previous memory policy: 4 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was expanded by 66MB 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was shrunk by 66MB 00:03:30.832 EAL: Trying to obtain current memory policy. 
00:03:30.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.832 EAL: Restoring previous memory policy: 4 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.832 EAL: request: mp_malloc_sync 00:03:30.832 EAL: No shared files mode enabled, IPC is disabled 00:03:30.832 EAL: Heap on socket 0 was expanded by 130MB 00:03:30.832 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.090 EAL: request: mp_malloc_sync 00:03:31.090 EAL: No shared files mode enabled, IPC is disabled 00:03:31.090 EAL: Heap on socket 0 was shrunk by 130MB 00:03:31.090 EAL: Trying to obtain current memory policy. 00:03:31.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.090 EAL: Restoring previous memory policy: 4 00:03:31.090 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.090 EAL: request: mp_malloc_sync 00:03:31.090 EAL: No shared files mode enabled, IPC is disabled 00:03:31.090 EAL: Heap on socket 0 was expanded by 258MB 00:03:31.090 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.090 EAL: request: mp_malloc_sync 00:03:31.090 EAL: No shared files mode enabled, IPC is disabled 00:03:31.090 EAL: Heap on socket 0 was shrunk by 258MB 00:03:31.090 EAL: Trying to obtain current memory policy. 00:03:31.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.348 EAL: Restoring previous memory policy: 4 00:03:31.348 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.348 EAL: request: mp_malloc_sync 00:03:31.348 EAL: No shared files mode enabled, IPC is disabled 00:03:31.348 EAL: Heap on socket 0 was expanded by 514MB 00:03:31.348 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.607 EAL: request: mp_malloc_sync 00:03:31.607 EAL: No shared files mode enabled, IPC is disabled 00:03:31.607 EAL: Heap on socket 0 was shrunk by 514MB 00:03:31.607 EAL: Trying to obtain current memory policy. 
00:03:31.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.865 EAL: Restoring previous memory policy: 4 00:03:31.865 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.865 EAL: request: mp_malloc_sync 00:03:31.865 EAL: No shared files mode enabled, IPC is disabled 00:03:31.865 EAL: Heap on socket 0 was expanded by 1026MB 00:03:32.124 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.124 EAL: request: mp_malloc_sync 00:03:32.124 EAL: No shared files mode enabled, IPC is disabled 00:03:32.125 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:32.125 passed 00:03:32.125 00:03:32.125 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.125 suites 1 1 n/a 0 0 00:03:32.125 tests 2 2 2 0 0 00:03:32.125 asserts 497 497 497 0 n/a 00:03:32.125 00:03:32.125 Elapsed time = 1.371 seconds 00:03:32.125 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.125 EAL: request: mp_malloc_sync 00:03:32.125 EAL: No shared files mode enabled, IPC is disabled 00:03:32.125 EAL: Heap on socket 0 was shrunk by 2MB 00:03:32.125 EAL: No shared files mode enabled, IPC is disabled 00:03:32.125 EAL: No shared files mode enabled, IPC is disabled 00:03:32.125 EAL: No shared files mode enabled, IPC is disabled 00:03:32.125 00:03:32.125 real 0m1.500s 00:03:32.125 user 0m0.848s 00:03:32.125 sys 0m0.620s 00:03:32.125 02:20:19 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:32.125 02:20:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:32.125 ************************************ 00:03:32.125 END TEST env_vtophys 00:03:32.125 ************************************ 00:03:32.383 02:20:19 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:32.383 02:20:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:32.383 02:20:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:32.383 02:20:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.383 ************************************ 00:03:32.383 START TEST env_pci 00:03:32.383 ************************************ 00:03:32.383 02:20:19 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:32.383 00:03:32.383 00:03:32.383 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.383 http://cunit.sourceforge.net/ 00:03:32.383 00:03:32.383 00:03:32.383 Suite: pci 00:03:32.383 Test: pci_hook ...[2024-05-15 02:20:19.588912] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2175682 has claimed it 00:03:32.383 EAL: Cannot find device (10000:00:01.0) 00:03:32.383 EAL: Failed to attach device on primary process 00:03:32.383 passed 00:03:32.383 00:03:32.383 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.383 suites 1 1 n/a 0 0 00:03:32.383 tests 1 1 1 0 0 00:03:32.383 asserts 25 25 25 0 n/a 00:03:32.383 00:03:32.383 Elapsed time = 0.027 seconds 00:03:32.383 00:03:32.383 real 0m0.040s 00:03:32.383 user 0m0.016s 00:03:32.383 sys 0m0.023s 00:03:32.383 02:20:19 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:32.383 02:20:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:32.383 ************************************ 00:03:32.383 END TEST env_pci 00:03:32.383 ************************************ 00:03:32.383 02:20:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:32.383 
02:20:19 env -- env/env.sh@15 -- # uname 00:03:32.383 02:20:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:32.383 02:20:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:32.383 02:20:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:32.383 02:20:19 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:03:32.383 02:20:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:32.383 02:20:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.383 ************************************ 00:03:32.383 START TEST env_dpdk_post_init 00:03:32.383 ************************************ 00:03:32.383 02:20:19 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:32.383 EAL: Detected CPU lcores: 48 00:03:32.383 EAL: Detected NUMA nodes: 2 00:03:32.383 EAL: Detected shared linkage of DPDK 00:03:32.383 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:32.383 EAL: Selected IOVA mode 'VA' 00:03:32.383 EAL: No free 2048 kB hugepages reported on node 1 00:03:32.383 EAL: VFIO support initialized 00:03:32.383 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:32.642 EAL: Using IOMMU type 1 (Type 1) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:32.642 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:33.576 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:36.857 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:36.857 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:36.857 Starting DPDK initialization... 00:03:36.857 Starting SPDK post initialization... 00:03:36.857 SPDK NVMe probe 00:03:36.857 Attaching to 0000:88:00.0 00:03:36.857 Attached to 0000:88:00.0 00:03:36.857 Cleaning up... 
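The env_dpdk_post_init pass that just finished ("Cleaning up...") brings EAL up on a single core with a pinned base virtual address, probes the sixteen I/OAT channels with spdk_ioat and 0000:88:00.0 with spdk_nvme, then detaches everything again. Re-running it by hand only needs the flags already visible in the trace (the path assumes the same workspace layout):

  # Single-core EAL init at a fixed virtual base, then PCI probe, attach and detach.
  $rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000

env.sh only appends --base-virtaddr on Linux; pinning the base keeps DPDK's memory mappings at predictable addresses, which matters once secondary processes want to share them.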
00:03:36.857 00:03:36.857 real 0m4.400s 00:03:36.857 user 0m3.259s 00:03:36.857 sys 0m0.198s 00:03:36.857 02:20:24 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:36.857 02:20:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:36.857 ************************************ 00:03:36.857 END TEST env_dpdk_post_init 00:03:36.857 ************************************ 00:03:36.857 02:20:24 env -- env/env.sh@26 -- # uname 00:03:36.857 02:20:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:36.857 02:20:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:36.857 02:20:24 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:36.857 02:20:24 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:36.857 02:20:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:36.857 ************************************ 00:03:36.857 START TEST env_mem_callbacks 00:03:36.857 ************************************ 00:03:36.857 02:20:24 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:36.857 EAL: Detected CPU lcores: 48 00:03:36.857 EAL: Detected NUMA nodes: 2 00:03:36.857 EAL: Detected shared linkage of DPDK 00:03:36.857 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:36.857 EAL: Selected IOVA mode 'VA' 00:03:36.857 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.857 EAL: VFIO support initialized 00:03:36.857 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:36.857 00:03:36.857 00:03:36.857 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.857 http://cunit.sourceforge.net/ 00:03:36.857 00:03:36.857 00:03:36.857 Suite: memory 00:03:36.857 Test: test ... 
00:03:36.857 register 0x200000200000 2097152 00:03:36.857 malloc 3145728 00:03:36.857 register 0x200000400000 4194304 00:03:36.858 buf 0x200000500000 len 3145728 PASSED 00:03:36.858 malloc 64 00:03:36.858 buf 0x2000004fff40 len 64 PASSED 00:03:36.858 malloc 4194304 00:03:36.858 register 0x200000800000 6291456 00:03:36.858 buf 0x200000a00000 len 4194304 PASSED 00:03:36.858 free 0x200000500000 3145728 00:03:36.858 free 0x2000004fff40 64 00:03:36.858 unregister 0x200000400000 4194304 PASSED 00:03:36.858 free 0x200000a00000 4194304 00:03:36.858 unregister 0x200000800000 6291456 PASSED 00:03:36.858 malloc 8388608 00:03:36.858 register 0x200000400000 10485760 00:03:36.858 buf 0x200000600000 len 8388608 PASSED 00:03:36.858 free 0x200000600000 8388608 00:03:36.858 unregister 0x200000400000 10485760 PASSED 00:03:36.858 passed 00:03:36.858 00:03:36.858 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.858 suites 1 1 n/a 0 0 00:03:36.858 tests 1 1 1 0 0 00:03:36.858 asserts 15 15 15 0 n/a 00:03:36.858 00:03:36.858 Elapsed time = 0.005 seconds 00:03:36.858 00:03:36.858 real 0m0.054s 00:03:36.858 user 0m0.015s 00:03:36.858 sys 0m0.039s 00:03:36.858 02:20:24 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:36.858 02:20:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:36.858 ************************************ 00:03:36.858 END TEST env_mem_callbacks 00:03:36.858 ************************************ 00:03:36.858 00:03:36.858 real 0m6.464s 00:03:36.858 user 0m4.418s 00:03:36.858 sys 0m1.076s 00:03:36.858 02:20:24 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:36.858 02:20:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:36.858 ************************************ 00:03:36.858 END TEST env 00:03:36.858 ************************************ 00:03:36.858 02:20:24 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:36.858 02:20:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:36.858 02:20:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:36.858 02:20:24 -- common/autotest_common.sh@10 -- # set +x 00:03:36.858 ************************************ 00:03:36.858 START TEST rpc 00:03:36.858 ************************************ 00:03:36.858 02:20:24 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:37.116 * Looking for test storage... 00:03:37.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:37.116 02:20:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2176335 00:03:37.116 02:20:24 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:37.116 02:20:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:37.116 02:20:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2176335 00:03:37.116 02:20:24 rpc -- common/autotest_common.sh@827 -- # '[' -z 2176335 ']' 00:03:37.116 02:20:24 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.116 02:20:24 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:37.116 02:20:24 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
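rpc.sh sets up its target the same way the opal test did earlier: start spdk_tgt (here with only the bdev subsystem), arm a cleanup trap, then block in waitforlisten until the UNIX-domain RPC socket answers. Condensed from the trace, with killprocess and waitforlisten being the autotest_common.sh helpers already seen above (capturing the pid via $! is inferred; the log only shows the resulting 2176335):

  # Start the target with just the bdev subsystem and keep its pid for cleanup.
  $rootdir/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten $spdk_pid    # polls /var/tmp/spdk.sock until the target accepts RPCs

Every rpc_cmd in the integrity test below is then routed through scripts/rpc.py against that socket.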
00:03:37.116 02:20:24 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:37.116 02:20:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.116 [2024-05-15 02:20:24.366444] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:03:37.116 [2024-05-15 02:20:24.366547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176335 ] 00:03:37.116 EAL: No free 2048 kB hugepages reported on node 1 00:03:37.116 [2024-05-15 02:20:24.434478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.374 [2024-05-15 02:20:24.541228] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:37.374 [2024-05-15 02:20:24.541290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2176335' to capture a snapshot of events at runtime. 00:03:37.374 [2024-05-15 02:20:24.541306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:37.374 [2024-05-15 02:20:24.541319] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:37.374 [2024-05-15 02:20:24.541331] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2176335 for offline analysis/debug. 00:03:37.374 [2024-05-15 02:20:24.541370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.941 02:20:25 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:37.941 02:20:25 rpc -- common/autotest_common.sh@860 -- # return 0 00:03:37.941 02:20:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:37.941 02:20:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:37.941 02:20:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:37.941 02:20:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:37.941 02:20:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:37.941 02:20:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:37.941 02:20:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.941 ************************************ 00:03:37.941 START TEST rpc_integrity 00:03:37.941 ************************************ 00:03:37.941 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:37.941 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:37.941 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:37.941 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.941 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:37.941 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:37.941 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:38.200 02:20:25 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:38.200 { 00:03:38.200 "name": "Malloc0", 00:03:38.200 "aliases": [ 00:03:38.200 "0edb6338-3de4-402d-9b67-f6b1f059b566" 00:03:38.200 ], 00:03:38.200 "product_name": "Malloc disk", 00:03:38.200 "block_size": 512, 00:03:38.200 "num_blocks": 16384, 00:03:38.200 "uuid": "0edb6338-3de4-402d-9b67-f6b1f059b566", 00:03:38.200 "assigned_rate_limits": { 00:03:38.200 "rw_ios_per_sec": 0, 00:03:38.200 "rw_mbytes_per_sec": 0, 00:03:38.200 "r_mbytes_per_sec": 0, 00:03:38.200 "w_mbytes_per_sec": 0 00:03:38.200 }, 00:03:38.200 "claimed": false, 00:03:38.200 "zoned": false, 00:03:38.200 "supported_io_types": { 00:03:38.200 "read": true, 00:03:38.200 "write": true, 00:03:38.200 "unmap": true, 00:03:38.200 "write_zeroes": true, 00:03:38.200 "flush": true, 00:03:38.200 "reset": true, 00:03:38.200 "compare": false, 00:03:38.200 "compare_and_write": false, 00:03:38.200 "abort": true, 00:03:38.200 "nvme_admin": false, 00:03:38.200 "nvme_io": false 00:03:38.200 }, 00:03:38.200 "memory_domains": [ 00:03:38.200 { 00:03:38.200 "dma_device_id": "system", 00:03:38.200 "dma_device_type": 1 00:03:38.200 }, 00:03:38.200 { 00:03:38.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.200 "dma_device_type": 2 00:03:38.200 } 00:03:38.200 ], 00:03:38.200 "driver_specific": {} 00:03:38.200 } 00:03:38.200 ]' 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.200 [2024-05-15 02:20:25.429521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:38.200 [2024-05-15 02:20:25.429570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:38.200 [2024-05-15 02:20:25.429594] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15b9b50 00:03:38.200 [2024-05-15 02:20:25.429609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:38.200 [2024-05-15 02:20:25.431086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:38.200 [2024-05-15 02:20:25.431114] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:38.200 Passthru0 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:38.200 { 00:03:38.200 "name": "Malloc0", 00:03:38.200 "aliases": [ 00:03:38.200 "0edb6338-3de4-402d-9b67-f6b1f059b566" 00:03:38.200 ], 00:03:38.200 "product_name": "Malloc disk", 00:03:38.200 "block_size": 512, 00:03:38.200 "num_blocks": 16384, 00:03:38.200 "uuid": "0edb6338-3de4-402d-9b67-f6b1f059b566", 00:03:38.200 "assigned_rate_limits": { 00:03:38.200 "rw_ios_per_sec": 0, 00:03:38.200 "rw_mbytes_per_sec": 0, 00:03:38.200 "r_mbytes_per_sec": 0, 00:03:38.200 "w_mbytes_per_sec": 0 00:03:38.200 }, 00:03:38.200 "claimed": true, 00:03:38.200 "claim_type": "exclusive_write", 00:03:38.200 "zoned": false, 00:03:38.200 "supported_io_types": { 00:03:38.200 "read": true, 00:03:38.200 "write": true, 00:03:38.200 "unmap": true, 00:03:38.200 "write_zeroes": true, 00:03:38.200 "flush": true, 00:03:38.200 "reset": true, 00:03:38.200 "compare": false, 00:03:38.200 "compare_and_write": false, 00:03:38.200 "abort": true, 00:03:38.200 "nvme_admin": false, 00:03:38.200 "nvme_io": false 00:03:38.200 }, 00:03:38.200 "memory_domains": [ 00:03:38.200 { 00:03:38.200 "dma_device_id": "system", 00:03:38.200 "dma_device_type": 1 00:03:38.200 }, 00:03:38.200 { 00:03:38.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.200 "dma_device_type": 2 00:03:38.200 } 00:03:38.200 ], 00:03:38.200 "driver_specific": {} 00:03:38.200 }, 00:03:38.200 { 00:03:38.200 "name": "Passthru0", 00:03:38.200 "aliases": [ 00:03:38.200 "52f5ac67-502e-5617-ac9e-efbe293ad7b1" 00:03:38.200 ], 00:03:38.200 "product_name": "passthru", 00:03:38.200 "block_size": 512, 00:03:38.200 "num_blocks": 16384, 00:03:38.200 "uuid": "52f5ac67-502e-5617-ac9e-efbe293ad7b1", 00:03:38.200 "assigned_rate_limits": { 00:03:38.200 "rw_ios_per_sec": 0, 00:03:38.200 "rw_mbytes_per_sec": 0, 00:03:38.200 "r_mbytes_per_sec": 0, 00:03:38.200 "w_mbytes_per_sec": 0 00:03:38.200 }, 00:03:38.200 "claimed": false, 00:03:38.200 "zoned": false, 00:03:38.200 "supported_io_types": { 00:03:38.200 "read": true, 00:03:38.200 "write": true, 00:03:38.200 "unmap": true, 00:03:38.200 "write_zeroes": true, 00:03:38.200 "flush": true, 00:03:38.200 "reset": true, 00:03:38.200 "compare": false, 00:03:38.200 "compare_and_write": false, 00:03:38.200 "abort": true, 00:03:38.200 "nvme_admin": false, 00:03:38.200 "nvme_io": false 00:03:38.200 }, 00:03:38.200 "memory_domains": [ 00:03:38.200 { 00:03:38.200 "dma_device_id": "system", 00:03:38.200 "dma_device_type": 1 00:03:38.200 }, 00:03:38.200 { 00:03:38.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.200 "dma_device_type": 2 00:03:38.200 } 00:03:38.200 ], 00:03:38.200 "driver_specific": { 00:03:38.200 "passthru": { 00:03:38.200 "name": "Passthru0", 00:03:38.200 "base_bdev_name": "Malloc0" 00:03:38.200 } 00:03:38.200 } 00:03:38.200 } 00:03:38.200 ]' 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.200 
02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:38.200 02:20:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:38.200 00:03:38.200 real 0m0.230s 00:03:38.200 user 0m0.155s 00:03:38.200 sys 0m0.016s 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:38.200 02:20:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.200 ************************************ 00:03:38.200 END TEST rpc_integrity 00:03:38.200 ************************************ 00:03:38.200 02:20:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:38.200 02:20:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:38.200 02:20:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.200 02:20:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.200 ************************************ 00:03:38.200 START TEST rpc_plugins 00:03:38.200 ************************************ 00:03:38.200 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:03:38.200 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:38.200 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.200 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:38.459 { 00:03:38.459 "name": "Malloc1", 00:03:38.459 "aliases": [ 00:03:38.459 "fd217dca-4c35-4ec7-85e6-cfefd9b41495" 00:03:38.459 ], 00:03:38.459 "product_name": "Malloc disk", 00:03:38.459 "block_size": 4096, 00:03:38.459 "num_blocks": 256, 00:03:38.459 "uuid": "fd217dca-4c35-4ec7-85e6-cfefd9b41495", 00:03:38.459 "assigned_rate_limits": { 00:03:38.459 "rw_ios_per_sec": 0, 00:03:38.459 "rw_mbytes_per_sec": 0, 00:03:38.459 "r_mbytes_per_sec": 0, 00:03:38.459 "w_mbytes_per_sec": 0 00:03:38.459 }, 00:03:38.459 "claimed": false, 00:03:38.459 "zoned": false, 00:03:38.459 "supported_io_types": { 00:03:38.459 "read": true, 00:03:38.459 "write": true, 00:03:38.459 "unmap": true, 00:03:38.459 "write_zeroes": true, 00:03:38.459 
"flush": true, 00:03:38.459 "reset": true, 00:03:38.459 "compare": false, 00:03:38.459 "compare_and_write": false, 00:03:38.459 "abort": true, 00:03:38.459 "nvme_admin": false, 00:03:38.459 "nvme_io": false 00:03:38.459 }, 00:03:38.459 "memory_domains": [ 00:03:38.459 { 00:03:38.459 "dma_device_id": "system", 00:03:38.459 "dma_device_type": 1 00:03:38.459 }, 00:03:38.459 { 00:03:38.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.459 "dma_device_type": 2 00:03:38.459 } 00:03:38.459 ], 00:03:38.459 "driver_specific": {} 00:03:38.459 } 00:03:38.459 ]' 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:38.459 02:20:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:38.459 00:03:38.459 real 0m0.108s 00:03:38.459 user 0m0.073s 00:03:38.459 sys 0m0.007s 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:38.459 02:20:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.459 ************************************ 00:03:38.459 END TEST rpc_plugins 00:03:38.459 ************************************ 00:03:38.459 02:20:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:38.459 02:20:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:38.459 02:20:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.459 02:20:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.459 ************************************ 00:03:38.459 START TEST rpc_trace_cmd_test 00:03:38.459 ************************************ 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:38.459 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2176335", 00:03:38.459 "tpoint_group_mask": "0x8", 00:03:38.459 "iscsi_conn": { 00:03:38.459 "mask": "0x2", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "scsi": { 00:03:38.459 "mask": "0x4", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "bdev": { 00:03:38.459 "mask": "0x8", 00:03:38.459 "tpoint_mask": 
"0xffffffffffffffff" 00:03:38.459 }, 00:03:38.459 "nvmf_rdma": { 00:03:38.459 "mask": "0x10", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "nvmf_tcp": { 00:03:38.459 "mask": "0x20", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "ftl": { 00:03:38.459 "mask": "0x40", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "blobfs": { 00:03:38.459 "mask": "0x80", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "dsa": { 00:03:38.459 "mask": "0x200", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "thread": { 00:03:38.459 "mask": "0x400", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "nvme_pcie": { 00:03:38.459 "mask": "0x800", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "iaa": { 00:03:38.459 "mask": "0x1000", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "nvme_tcp": { 00:03:38.459 "mask": "0x2000", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "bdev_nvme": { 00:03:38.459 "mask": "0x4000", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 }, 00:03:38.459 "sock": { 00:03:38.459 "mask": "0x8000", 00:03:38.459 "tpoint_mask": "0x0" 00:03:38.459 } 00:03:38.459 }' 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:38.459 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:38.717 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:38.717 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:38.717 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:38.717 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:38.717 02:20:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:38.717 00:03:38.717 real 0m0.199s 00:03:38.717 user 0m0.176s 00:03:38.717 sys 0m0.013s 00:03:38.717 02:20:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:38.717 02:20:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:38.717 ************************************ 00:03:38.717 END TEST rpc_trace_cmd_test 00:03:38.717 ************************************ 00:03:38.717 02:20:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:38.717 02:20:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:38.717 02:20:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:38.717 02:20:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:38.717 02:20:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.717 02:20:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.717 ************************************ 00:03:38.717 START TEST rpc_daemon_integrity 00:03:38.717 ************************************ 00:03:38.717 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:38.717 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:38.717 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.717 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:38.718 { 00:03:38.718 "name": "Malloc2", 00:03:38.718 "aliases": [ 00:03:38.718 "70bb259c-dcdb-4b85-ab8c-a37be9a9fc86" 00:03:38.718 ], 00:03:38.718 "product_name": "Malloc disk", 00:03:38.718 "block_size": 512, 00:03:38.718 "num_blocks": 16384, 00:03:38.718 "uuid": "70bb259c-dcdb-4b85-ab8c-a37be9a9fc86", 00:03:38.718 "assigned_rate_limits": { 00:03:38.718 "rw_ios_per_sec": 0, 00:03:38.718 "rw_mbytes_per_sec": 0, 00:03:38.718 "r_mbytes_per_sec": 0, 00:03:38.718 "w_mbytes_per_sec": 0 00:03:38.718 }, 00:03:38.718 "claimed": false, 00:03:38.718 "zoned": false, 00:03:38.718 "supported_io_types": { 00:03:38.718 "read": true, 00:03:38.718 "write": true, 00:03:38.718 "unmap": true, 00:03:38.718 "write_zeroes": true, 00:03:38.718 "flush": true, 00:03:38.718 "reset": true, 00:03:38.718 "compare": false, 00:03:38.718 "compare_and_write": false, 00:03:38.718 "abort": true, 00:03:38.718 "nvme_admin": false, 00:03:38.718 "nvme_io": false 00:03:38.718 }, 00:03:38.718 "memory_domains": [ 00:03:38.718 { 00:03:38.718 "dma_device_id": "system", 00:03:38.718 "dma_device_type": 1 00:03:38.718 }, 00:03:38.718 { 00:03:38.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.718 "dma_device_type": 2 00:03:38.718 } 00:03:38.718 ], 00:03:38.718 "driver_specific": {} 00:03:38.718 } 00:03:38.718 ]' 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.718 [2024-05-15 02:20:26.115535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:38.718 [2024-05-15 02:20:26.115581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:38.718 [2024-05-15 02:20:26.115604] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15bd260 00:03:38.718 [2024-05-15 02:20:26.115620] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:38.718 [2024-05-15 02:20:26.116968] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:38.718 [2024-05-15 02:20:26.117009] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:38.718 Passthru0 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.718 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.976 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.976 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:38.976 { 00:03:38.976 "name": "Malloc2", 00:03:38.976 "aliases": [ 00:03:38.976 "70bb259c-dcdb-4b85-ab8c-a37be9a9fc86" 00:03:38.976 ], 00:03:38.976 "product_name": "Malloc disk", 00:03:38.976 "block_size": 512, 00:03:38.976 "num_blocks": 16384, 00:03:38.976 "uuid": "70bb259c-dcdb-4b85-ab8c-a37be9a9fc86", 00:03:38.976 "assigned_rate_limits": { 00:03:38.976 "rw_ios_per_sec": 0, 00:03:38.976 "rw_mbytes_per_sec": 0, 00:03:38.976 "r_mbytes_per_sec": 0, 00:03:38.976 "w_mbytes_per_sec": 0 00:03:38.976 }, 00:03:38.976 "claimed": true, 00:03:38.976 "claim_type": "exclusive_write", 00:03:38.976 "zoned": false, 00:03:38.976 "supported_io_types": { 00:03:38.976 "read": true, 00:03:38.976 "write": true, 00:03:38.976 "unmap": true, 00:03:38.976 "write_zeroes": true, 00:03:38.976 "flush": true, 00:03:38.977 "reset": true, 00:03:38.977 "compare": false, 00:03:38.977 "compare_and_write": false, 00:03:38.977 "abort": true, 00:03:38.977 "nvme_admin": false, 00:03:38.977 "nvme_io": false 00:03:38.977 }, 00:03:38.977 "memory_domains": [ 00:03:38.977 { 00:03:38.977 "dma_device_id": "system", 00:03:38.977 "dma_device_type": 1 00:03:38.977 }, 00:03:38.977 { 00:03:38.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.977 "dma_device_type": 2 00:03:38.977 } 00:03:38.977 ], 00:03:38.977 "driver_specific": {} 00:03:38.977 }, 00:03:38.977 { 00:03:38.977 "name": "Passthru0", 00:03:38.977 "aliases": [ 00:03:38.977 "ea28776e-8494-54f7-98ce-415b4c21b96f" 00:03:38.977 ], 00:03:38.977 "product_name": "passthru", 00:03:38.977 "block_size": 512, 00:03:38.977 "num_blocks": 16384, 00:03:38.977 "uuid": "ea28776e-8494-54f7-98ce-415b4c21b96f", 00:03:38.977 "assigned_rate_limits": { 00:03:38.977 "rw_ios_per_sec": 0, 00:03:38.977 "rw_mbytes_per_sec": 0, 00:03:38.977 "r_mbytes_per_sec": 0, 00:03:38.977 "w_mbytes_per_sec": 0 00:03:38.977 }, 00:03:38.977 "claimed": false, 00:03:38.977 "zoned": false, 00:03:38.977 "supported_io_types": { 00:03:38.977 "read": true, 00:03:38.977 "write": true, 00:03:38.977 "unmap": true, 00:03:38.977 "write_zeroes": true, 00:03:38.977 "flush": true, 00:03:38.977 "reset": true, 00:03:38.977 "compare": false, 00:03:38.977 "compare_and_write": false, 00:03:38.977 "abort": true, 00:03:38.977 "nvme_admin": false, 00:03:38.977 "nvme_io": false 00:03:38.977 }, 00:03:38.977 "memory_domains": [ 00:03:38.977 { 00:03:38.977 "dma_device_id": "system", 00:03:38.977 "dma_device_type": 1 00:03:38.977 }, 00:03:38.977 { 00:03:38.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.977 "dma_device_type": 2 00:03:38.977 } 00:03:38.977 ], 00:03:38.977 "driver_specific": { 00:03:38.977 "passthru": { 00:03:38.977 "name": "Passthru0", 00:03:38.977 "base_bdev_name": "Malloc2" 00:03:38.977 } 00:03:38.977 } 00:03:38.977 } 00:03:38.977 ]' 00:03:38.977 02:20:26 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:38.977 00:03:38.977 real 0m0.218s 00:03:38.977 user 0m0.144s 00:03:38.977 sys 0m0.020s 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:38.977 02:20:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.977 ************************************ 00:03:38.977 END TEST rpc_daemon_integrity 00:03:38.977 ************************************ 00:03:38.977 02:20:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:38.977 02:20:26 rpc -- rpc/rpc.sh@84 -- # killprocess 2176335 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@946 -- # '[' -z 2176335 ']' 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@950 -- # kill -0 2176335 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@951 -- # uname 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2176335 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2176335' 00:03:38.977 killing process with pid 2176335 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@965 -- # kill 2176335 00:03:38.977 02:20:26 rpc -- common/autotest_common.sh@970 -- # wait 2176335 00:03:39.543 00:03:39.543 real 0m2.487s 00:03:39.543 user 0m3.142s 00:03:39.543 sys 0m0.637s 00:03:39.543 02:20:26 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:39.543 02:20:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.543 ************************************ 00:03:39.543 END TEST rpc 00:03:39.543 ************************************ 00:03:39.544 02:20:26 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:39.544 02:20:26 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:39.544 02:20:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.544 02:20:26 -- common/autotest_common.sh@10 -- # set +x 00:03:39.544 ************************************ 00:03:39.544 START TEST skip_rpc 00:03:39.544 ************************************ 00:03:39.544 02:20:26 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:39.544 * Looking for test storage... 00:03:39.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.544 02:20:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.544 02:20:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:39.544 02:20:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:39.544 02:20:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:39.544 02:20:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.544 02:20:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.544 ************************************ 00:03:39.544 START TEST skip_rpc 00:03:39.544 ************************************ 00:03:39.544 02:20:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:03:39.544 02:20:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2176784 00:03:39.544 02:20:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:39.544 02:20:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.544 02:20:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:39.544 [2024-05-15 02:20:26.935579] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
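Editor's note: the skip_rpc case starting here launches the target with --no-rpc-server and then asserts that an RPC cannot succeed. A hedged bash equivalent of that negative check follows; the NOT wrapper and killprocess/trap plumbing of the real script are replaced with a plain if, and the 5 second settle time mirrors the sleep in the log.

    spdk_tgt --no-rpc-server -m 0x1 &          # target deliberately started without an RPC server
    spdk_pid=$!
    sleep 5                                    # matches the harness sleep before the probe
    if scripts/rpc.py spdk_get_version; then   # any RPC must fail: /var/tmp/spdk.sock never opens
        echo "unexpected: RPC succeeded without an RPC server" >&2
        exit 1
    fi
    kill "$spdk_pid"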
00:03:39.544 [2024-05-15 02:20:26.935650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176784 ] 00:03:39.802 EAL: No free 2048 kB hugepages reported on node 1 00:03:39.802 [2024-05-15 02:20:27.007179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.802 [2024-05-15 02:20:27.125476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2176784 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 2176784 ']' 00:03:45.095 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 2176784 00:03:45.096 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:03:45.096 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:45.096 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2176784 00:03:45.096 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:45.096 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:45.096 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2176784' 00:03:45.096 killing process with pid 2176784 00:03:45.096 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 2176784 00:03:45.096 02:20:31 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 2176784 00:03:45.096 00:03:45.096 real 0m5.509s 00:03:45.096 user 0m5.186s 00:03:45.096 sys 0m0.318s 00:03:45.096 02:20:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:45.096 02:20:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.096 ************************************ 00:03:45.096 END TEST skip_rpc 
00:03:45.096 ************************************ 00:03:45.096 02:20:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:45.096 02:20:32 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:45.096 02:20:32 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.096 02:20:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.096 ************************************ 00:03:45.096 START TEST skip_rpc_with_json 00:03:45.096 ************************************ 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2177474 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2177474 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 2177474 ']' 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:45.096 02:20:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.096 [2024-05-15 02:20:32.507613] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
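Editor's note: skip_rpc_with_json, which begins here, drives the nvmf subsystem over RPC, snapshots the resulting configuration with save_config, and later replays it into a second target started with --json. The commands below restate that flow in plain bash; the RPC verbs and flags are the ones visible in the log, while config.json is shortened from the test/rpc/config.json path the script uses.

    scripts/rpc.py nvmf_get_transports --trtype tcp || true   # expected to fail while no transport exists
    scripts/rpc.py nvmf_create_transport -t tcp                # creates the TCP transport
    scripts/rpc.py save_config > config.json                   # snapshot the live JSON configuration
    # the saved file can then boot a second target non-interactively:
    # spdk_tgt --no-rpc-server -m 0x1 --json config.json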
00:03:45.096 [2024-05-15 02:20:32.507717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177474 ] 00:03:45.354 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.354 [2024-05-15 02:20:32.581444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.354 [2024-05-15 02:20:32.695760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.287 [2024-05-15 02:20:33.431468] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:46.287 request: 00:03:46.287 { 00:03:46.287 "trtype": "tcp", 00:03:46.287 "method": "nvmf_get_transports", 00:03:46.287 "req_id": 1 00:03:46.287 } 00:03:46.287 Got JSON-RPC error response 00:03:46.287 response: 00:03:46.287 { 00:03:46.287 "code": -19, 00:03:46.287 "message": "No such device" 00:03:46.287 } 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.287 [2024-05-15 02:20:33.439591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.287 02:20:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.287 { 00:03:46.287 "subsystems": [ 00:03:46.287 { 00:03:46.287 "subsystem": "vfio_user_target", 00:03:46.287 "config": null 00:03:46.287 }, 00:03:46.287 { 00:03:46.287 "subsystem": "keyring", 00:03:46.287 "config": [] 00:03:46.287 }, 00:03:46.287 { 00:03:46.287 "subsystem": "iobuf", 00:03:46.287 "config": [ 00:03:46.287 { 00:03:46.287 "method": "iobuf_set_options", 00:03:46.287 "params": { 00:03:46.287 "small_pool_count": 8192, 00:03:46.287 "large_pool_count": 1024, 00:03:46.287 "small_bufsize": 8192, 00:03:46.287 "large_bufsize": 135168 00:03:46.287 } 00:03:46.287 } 00:03:46.287 ] 00:03:46.287 }, 00:03:46.287 { 00:03:46.287 "subsystem": "sock", 00:03:46.288 "config": [ 00:03:46.288 { 00:03:46.288 "method": "sock_impl_set_options", 00:03:46.288 "params": { 00:03:46.288 "impl_name": "posix", 00:03:46.288 "recv_buf_size": 2097152, 00:03:46.288 "send_buf_size": 2097152, 
00:03:46.288 "enable_recv_pipe": true, 00:03:46.288 "enable_quickack": false, 00:03:46.288 "enable_placement_id": 0, 00:03:46.288 "enable_zerocopy_send_server": true, 00:03:46.288 "enable_zerocopy_send_client": false, 00:03:46.288 "zerocopy_threshold": 0, 00:03:46.288 "tls_version": 0, 00:03:46.288 "enable_ktls": false 00:03:46.288 } 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "method": "sock_impl_set_options", 00:03:46.288 "params": { 00:03:46.288 "impl_name": "ssl", 00:03:46.288 "recv_buf_size": 4096, 00:03:46.288 "send_buf_size": 4096, 00:03:46.288 "enable_recv_pipe": true, 00:03:46.288 "enable_quickack": false, 00:03:46.288 "enable_placement_id": 0, 00:03:46.288 "enable_zerocopy_send_server": true, 00:03:46.288 "enable_zerocopy_send_client": false, 00:03:46.288 "zerocopy_threshold": 0, 00:03:46.288 "tls_version": 0, 00:03:46.288 "enable_ktls": false 00:03:46.288 } 00:03:46.288 } 00:03:46.288 ] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "vmd", 00:03:46.288 "config": [] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "accel", 00:03:46.288 "config": [ 00:03:46.288 { 00:03:46.288 "method": "accel_set_options", 00:03:46.288 "params": { 00:03:46.288 "small_cache_size": 128, 00:03:46.288 "large_cache_size": 16, 00:03:46.288 "task_count": 2048, 00:03:46.288 "sequence_count": 2048, 00:03:46.288 "buf_count": 2048 00:03:46.288 } 00:03:46.288 } 00:03:46.288 ] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "bdev", 00:03:46.288 "config": [ 00:03:46.288 { 00:03:46.288 "method": "bdev_set_options", 00:03:46.288 "params": { 00:03:46.288 "bdev_io_pool_size": 65535, 00:03:46.288 "bdev_io_cache_size": 256, 00:03:46.288 "bdev_auto_examine": true, 00:03:46.288 "iobuf_small_cache_size": 128, 00:03:46.288 "iobuf_large_cache_size": 16 00:03:46.288 } 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "method": "bdev_raid_set_options", 00:03:46.288 "params": { 00:03:46.288 "process_window_size_kb": 1024 00:03:46.288 } 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "method": "bdev_iscsi_set_options", 00:03:46.288 "params": { 00:03:46.288 "timeout_sec": 30 00:03:46.288 } 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "method": "bdev_nvme_set_options", 00:03:46.288 "params": { 00:03:46.288 "action_on_timeout": "none", 00:03:46.288 "timeout_us": 0, 00:03:46.288 "timeout_admin_us": 0, 00:03:46.288 "keep_alive_timeout_ms": 10000, 00:03:46.288 "arbitration_burst": 0, 00:03:46.288 "low_priority_weight": 0, 00:03:46.288 "medium_priority_weight": 0, 00:03:46.288 "high_priority_weight": 0, 00:03:46.288 "nvme_adminq_poll_period_us": 10000, 00:03:46.288 "nvme_ioq_poll_period_us": 0, 00:03:46.288 "io_queue_requests": 0, 00:03:46.288 "delay_cmd_submit": true, 00:03:46.288 "transport_retry_count": 4, 00:03:46.288 "bdev_retry_count": 3, 00:03:46.288 "transport_ack_timeout": 0, 00:03:46.288 "ctrlr_loss_timeout_sec": 0, 00:03:46.288 "reconnect_delay_sec": 0, 00:03:46.288 "fast_io_fail_timeout_sec": 0, 00:03:46.288 "disable_auto_failback": false, 00:03:46.288 "generate_uuids": false, 00:03:46.288 "transport_tos": 0, 00:03:46.288 "nvme_error_stat": false, 00:03:46.288 "rdma_srq_size": 0, 00:03:46.288 "io_path_stat": false, 00:03:46.288 "allow_accel_sequence": false, 00:03:46.288 "rdma_max_cq_size": 0, 00:03:46.288 "rdma_cm_event_timeout_ms": 0, 00:03:46.288 "dhchap_digests": [ 00:03:46.288 "sha256", 00:03:46.288 "sha384", 00:03:46.288 "sha512" 00:03:46.288 ], 00:03:46.288 "dhchap_dhgroups": [ 00:03:46.288 "null", 00:03:46.288 "ffdhe2048", 00:03:46.288 "ffdhe3072", 00:03:46.288 "ffdhe4096", 00:03:46.288 
"ffdhe6144", 00:03:46.288 "ffdhe8192" 00:03:46.288 ] 00:03:46.288 } 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "method": "bdev_nvme_set_hotplug", 00:03:46.288 "params": { 00:03:46.288 "period_us": 100000, 00:03:46.288 "enable": false 00:03:46.288 } 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "method": "bdev_wait_for_examine" 00:03:46.288 } 00:03:46.288 ] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "scsi", 00:03:46.288 "config": null 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "scheduler", 00:03:46.288 "config": [ 00:03:46.288 { 00:03:46.288 "method": "framework_set_scheduler", 00:03:46.288 "params": { 00:03:46.288 "name": "static" 00:03:46.288 } 00:03:46.288 } 00:03:46.288 ] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "vhost_scsi", 00:03:46.288 "config": [] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "vhost_blk", 00:03:46.288 "config": [] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "ublk", 00:03:46.288 "config": [] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "nbd", 00:03:46.288 "config": [] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "nvmf", 00:03:46.288 "config": [ 00:03:46.288 { 00:03:46.288 "method": "nvmf_set_config", 00:03:46.288 "params": { 00:03:46.288 "discovery_filter": "match_any", 00:03:46.288 "admin_cmd_passthru": { 00:03:46.288 "identify_ctrlr": false 00:03:46.288 } 00:03:46.288 } 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "method": "nvmf_set_max_subsystems", 00:03:46.288 "params": { 00:03:46.288 "max_subsystems": 1024 00:03:46.288 } 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "method": "nvmf_set_crdt", 00:03:46.288 "params": { 00:03:46.288 "crdt1": 0, 00:03:46.288 "crdt2": 0, 00:03:46.288 "crdt3": 0 00:03:46.288 } 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "method": "nvmf_create_transport", 00:03:46.288 "params": { 00:03:46.288 "trtype": "TCP", 00:03:46.288 "max_queue_depth": 128, 00:03:46.288 "max_io_qpairs_per_ctrlr": 127, 00:03:46.288 "in_capsule_data_size": 4096, 00:03:46.288 "max_io_size": 131072, 00:03:46.288 "io_unit_size": 131072, 00:03:46.288 "max_aq_depth": 128, 00:03:46.288 "num_shared_buffers": 511, 00:03:46.288 "buf_cache_size": 4294967295, 00:03:46.288 "dif_insert_or_strip": false, 00:03:46.288 "zcopy": false, 00:03:46.288 "c2h_success": true, 00:03:46.288 "sock_priority": 0, 00:03:46.288 "abort_timeout_sec": 1, 00:03:46.288 "ack_timeout": 0, 00:03:46.288 "data_wr_pool_size": 0 00:03:46.288 } 00:03:46.288 } 00:03:46.288 ] 00:03:46.288 }, 00:03:46.288 { 00:03:46.288 "subsystem": "iscsi", 00:03:46.288 "config": [ 00:03:46.288 { 00:03:46.288 "method": "iscsi_set_options", 00:03:46.288 "params": { 00:03:46.288 "node_base": "iqn.2016-06.io.spdk", 00:03:46.288 "max_sessions": 128, 00:03:46.288 "max_connections_per_session": 2, 00:03:46.288 "max_queue_depth": 64, 00:03:46.288 "default_time2wait": 2, 00:03:46.288 "default_time2retain": 20, 00:03:46.288 "first_burst_length": 8192, 00:03:46.288 "immediate_data": true, 00:03:46.288 "allow_duplicated_isid": false, 00:03:46.288 "error_recovery_level": 0, 00:03:46.288 "nop_timeout": 60, 00:03:46.288 "nop_in_interval": 30, 00:03:46.288 "disable_chap": false, 00:03:46.288 "require_chap": false, 00:03:46.288 "mutual_chap": false, 00:03:46.288 "chap_group": 0, 00:03:46.288 "max_large_datain_per_connection": 64, 00:03:46.288 "max_r2t_per_connection": 4, 00:03:46.288 "pdu_pool_size": 36864, 00:03:46.288 "immediate_data_pool_size": 16384, 00:03:46.288 "data_out_pool_size": 2048 00:03:46.288 } 00:03:46.288 } 00:03:46.288 ] 00:03:46.288 } 
00:03:46.288 ] 00:03:46.288 } 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2177474 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2177474 ']' 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2177474 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2177474 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2177474' 00:03:46.288 killing process with pid 2177474 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2177474 00:03:46.288 02:20:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2177474 00:03:46.859 02:20:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2177743 00:03:46.859 02:20:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.859 02:20:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2177743 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2177743 ']' 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2177743 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2177743 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2177743' 00:03:52.116 killing process with pid 2177743 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2177743 00:03:52.116 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2177743 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.375 00:03:52.375 real 0m7.122s 00:03:52.375 user 0m6.865s 00:03:52.375 sys 0m0.738s 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.375 ************************************ 00:03:52.375 END TEST skip_rpc_with_json 00:03:52.375 ************************************ 00:03:52.375 02:20:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:52.375 02:20:39 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:52.375 02:20:39 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:52.375 02:20:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.375 ************************************ 00:03:52.375 START TEST skip_rpc_with_delay 00:03:52.375 ************************************ 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:52.375 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:52.375 [2024-05-15 02:20:39.679255] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
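Editor's note: the skip_rpc_with_delay case hinges on the app.c rejection logged just above; --wait-for-rpc cannot be combined with --no-rpc-server, so the test only needs the launch to exit non-zero. A minimal bash restatement of that assertion, with the harness's NOT/valid_exec_arg machinery dropped:

    if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then   # must be refused at startup
        echo "unexpected: --wait-for-rpc accepted without an RPC server" >&2
        exit 1
    fi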
00:03:52.376 [2024-05-15 02:20:39.679360] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:52.376 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:52.376 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:52.376 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:52.376 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:52.376 00:03:52.376 real 0m0.065s 00:03:52.376 user 0m0.047s 00:03:52.376 sys 0m0.018s 00:03:52.376 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:52.376 02:20:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:52.376 ************************************ 00:03:52.376 END TEST skip_rpc_with_delay 00:03:52.376 ************************************ 00:03:52.376 02:20:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:52.376 02:20:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:52.376 02:20:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:52.376 02:20:39 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:52.376 02:20:39 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:52.376 02:20:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.376 ************************************ 00:03:52.376 START TEST exit_on_failed_rpc_init 00:03:52.376 ************************************ 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2178456 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2178456 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 2178456 ']' 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:52.376 02:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:52.635 [2024-05-15 02:20:39.793734] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
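Editor's note: exit_on_failed_rpc_init starts a first target on core 0 and then launches a second one on core 1 against the same default RPC socket; the second instance must fail rpc_listen with "socket path ... in use" and exit non-zero, which is what the messages below show. A hedged reduction of that collision, with waitforlisten replaced by an assumed fixed sleep:

    spdk_tgt -m 0x1 &              # first target owns /var/tmp/spdk.sock
    first_pid=$!
    sleep 1                        # assumption; the harness waits on the RPC socket instead
    if spdk_tgt -m 0x2; then       # second target, same default socket path
        echo "unexpected: second target brought up its RPC server" >&2
        exit 1
    fi
    kill "$first_pid"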
00:03:52.635 [2024-05-15 02:20:39.793807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178456 ] 00:03:52.635 EAL: No free 2048 kB hugepages reported on node 1 00:03:52.635 [2024-05-15 02:20:39.866347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.635 [2024-05-15 02:20:39.982365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:53.571 02:20:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:53.571 [2024-05-15 02:20:40.795898] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:03:53.571 [2024-05-15 02:20:40.795998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178592 ] 00:03:53.571 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.571 [2024-05-15 02:20:40.868288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.830 [2024-05-15 02:20:40.986169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:53.830 [2024-05-15 02:20:40.986315] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:53.830 [2024-05-15 02:20:40.986337] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:53.830 [2024-05-15 02:20:40.986351] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2178456 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 2178456 ']' 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 2178456 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2178456 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2178456' 00:03:53.830 killing process with pid 2178456 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 2178456 00:03:53.830 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 2178456 00:03:54.398 00:03:54.398 real 0m1.873s 00:03:54.398 user 0m2.229s 00:03:54.398 sys 0m0.506s 00:03:54.398 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.398 02:20:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:54.398 ************************************ 00:03:54.398 END TEST exit_on_failed_rpc_init 00:03:54.398 ************************************ 00:03:54.398 02:20:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:54.398 00:03:54.398 real 0m14.832s 00:03:54.398 user 0m14.428s 00:03:54.398 sys 0m1.748s 00:03:54.398 02:20:41 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.398 02:20:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.398 ************************************ 00:03:54.398 END TEST skip_rpc 00:03:54.398 ************************************ 00:03:54.398 02:20:41 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:54.398 02:20:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:54.398 02:20:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:54.398 02:20:41 -- 
common/autotest_common.sh@10 -- # set +x 00:03:54.398 ************************************ 00:03:54.398 START TEST rpc_client 00:03:54.398 ************************************ 00:03:54.398 02:20:41 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:54.398 * Looking for test storage... 00:03:54.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:54.398 02:20:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:54.398 OK 00:03:54.398 02:20:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:54.398 00:03:54.398 real 0m0.067s 00:03:54.398 user 0m0.024s 00:03:54.398 sys 0m0.049s 00:03:54.398 02:20:41 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.398 02:20:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:54.398 ************************************ 00:03:54.398 END TEST rpc_client 00:03:54.398 ************************************ 00:03:54.398 02:20:41 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:54.398 02:20:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:54.398 02:20:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:54.398 02:20:41 -- common/autotest_common.sh@10 -- # set +x 00:03:54.398 ************************************ 00:03:54.398 START TEST json_config 00:03:54.398 ************************************ 00:03:54.398 02:20:41 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:54.657 02:20:41 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:54.657 02:20:41 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:54.658 02:20:41 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.658 02:20:41 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.658 02:20:41 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.658 02:20:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.658 02:20:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.658 02:20:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.658 02:20:41 json_config -- paths/export.sh@5 -- # export PATH 00:03:54.658 02:20:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.658 02:20:41 json_config -- nvmf/common.sh@47 -- # : 0 00:03:54.658 02:20:41 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:54.658 02:20:41 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:54.658 02:20:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:54.658 02:20:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.658 02:20:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.658 02:20:41 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:54.658 02:20:41 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:54.658 02:20:41 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:54.658 INFO: JSON configuration test init 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.658 02:20:41 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:54.658 02:20:41 json_config -- json_config/common.sh@9 -- # local app=target 00:03:54.658 02:20:41 json_config -- json_config/common.sh@10 -- # shift 00:03:54.658 02:20:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:54.658 02:20:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:54.658 02:20:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:54.658 02:20:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.658 02:20:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.658 02:20:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2178842 00:03:54.658 02:20:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:54.658 02:20:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:54.658 Waiting for target to run... 
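This launch uses --wait-for-rpc together with a private RPC socket (-r /var/tmp/spdk_tgt.sock), so the target holds off subsystem initialization until told to proceed over RPC. A minimal manual version of that contract is sketched below; the framework_start_init call stands in for whatever configuration the caller drives, whereas this test instead feeds a full config through load_config as traced further down.

    # start in the pre-init state on a dedicated RPC socket
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    # ...any pre-init RPCs would go here...

    # complete subsystem initialization so regular RPCs are accepted
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init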
00:03:54.658 02:20:41 json_config -- json_config/common.sh@25 -- # waitforlisten 2178842 /var/tmp/spdk_tgt.sock 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@827 -- # '[' -z 2178842 ']' 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:54.658 02:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.658 [2024-05-15 02:20:41.911108] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:03:54.658 [2024-05-15 02:20:41.911212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178842 ] 00:03:54.658 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.226 [2024-05-15 02:20:42.441488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.226 [2024-05-15 02:20:42.548865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.496 02:20:42 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:55.496 02:20:42 json_config -- common/autotest_common.sh@860 -- # return 0 00:03:55.496 02:20:42 json_config -- json_config/common.sh@26 -- # echo '' 00:03:55.496 00:03:55.496 02:20:42 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:55.496 02:20:42 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:55.496 02:20:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:55.496 02:20:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.496 02:20:42 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:55.496 02:20:42 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:55.496 02:20:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.496 02:20:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.496 02:20:42 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:55.496 02:20:42 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:55.496 02:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:58.781 02:20:46 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:58.781 02:20:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:58.781 02:20:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:58.781 02:20:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.781 02:20:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:58.781 02:20:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:58.781 02:20:46 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:03:58.781 02:20:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:58.781 02:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:58.781 02:20:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:59.039 02:20:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.039 02:20:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:59.039 02:20:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:59.039 02:20:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:59.039 02:20:46 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:59.039 02:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:59.297 MallocForNvmf0 00:03:59.297 02:20:46 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:59.297 02:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:59.556 MallocForNvmf1 00:03:59.556 02:20:46 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:59.556 02:20:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:59.814 [2024-05-15 02:20:47.121547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.814 02:20:47 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:59.814 02:20:47 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:00.072 02:20:47 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:00.072 02:20:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:00.330 02:20:47 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:00.330 02:20:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:00.588 02:20:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:00.588 02:20:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:00.846 [2024-05-15 02:20:48.092277] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:00.846 [2024-05-15 02:20:48.092817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:00.846 02:20:48 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:00.846 02:20:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.846 02:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.846 02:20:48 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:00.846 02:20:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.846 02:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.846 02:20:48 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:00.846 02:20:48 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:00.846 02:20:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.112 MallocBdevForConfigChangeCheck 00:04:01.112 02:20:48 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:01.112 02:20:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.112 02:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.112 02:20:48 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:01.112 02:20:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.427 02:20:48 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:01.427 INFO: shutting down applications... 
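The create_nvmf_subsystem_config steps above assemble the target state one RPC at a time over the same socket. Written out as a plain shell sequence (sizes, names, and addresses copied from this run), the configuration is:

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

    # snapshot the resulting configuration for the restart check that follows
    $RPC save_config > spdk_tgt_config.json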
00:04:01.427 02:20:48 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:01.427 02:20:48 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:01.427 02:20:48 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:01.427 02:20:48 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:03.325 Calling clear_iscsi_subsystem 00:04:03.325 Calling clear_nvmf_subsystem 00:04:03.326 Calling clear_nbd_subsystem 00:04:03.326 Calling clear_ublk_subsystem 00:04:03.326 Calling clear_vhost_blk_subsystem 00:04:03.326 Calling clear_vhost_scsi_subsystem 00:04:03.326 Calling clear_bdev_subsystem 00:04:03.326 02:20:50 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:03.326 02:20:50 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:03.326 02:20:50 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:03.326 02:20:50 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.326 02:20:50 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:03.326 02:20:50 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:03.583 02:20:50 json_config -- json_config/json_config.sh@345 -- # break 00:04:03.584 02:20:50 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:03.584 02:20:50 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:03.584 02:20:50 json_config -- json_config/common.sh@31 -- # local app=target 00:04:03.584 02:20:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:03.584 02:20:50 json_config -- json_config/common.sh@35 -- # [[ -n 2178842 ]] 00:04:03.584 02:20:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2178842 00:04:03.584 [2024-05-15 02:20:50.845551] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:03.584 02:20:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:03.584 02:20:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:03.584 02:20:50 json_config -- json_config/common.sh@41 -- # kill -0 2178842 00:04:03.584 02:20:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:04.150 02:20:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:04.150 02:20:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.150 02:20:51 json_config -- json_config/common.sh@41 -- # kill -0 2178842 00:04:04.150 02:20:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:04.150 02:20:51 json_config -- json_config/common.sh@43 -- # break 00:04:04.150 02:20:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:04.150 02:20:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:04.150 SPDK target shutdown done 00:04:04.150 02:20:51 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching 
applications...' 00:04:04.150 INFO: relaunching applications... 00:04:04.150 02:20:51 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.150 02:20:51 json_config -- json_config/common.sh@9 -- # local app=target 00:04:04.150 02:20:51 json_config -- json_config/common.sh@10 -- # shift 00:04:04.150 02:20:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.150 02:20:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.150 02:20:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.150 02:20:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.150 02:20:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.150 02:20:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2180036 00:04:04.150 02:20:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.150 02:20:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.150 Waiting for target to run... 00:04:04.150 02:20:51 json_config -- json_config/common.sh@25 -- # waitforlisten 2180036 /var/tmp/spdk_tgt.sock 00:04:04.150 02:20:51 json_config -- common/autotest_common.sh@827 -- # '[' -z 2180036 ']' 00:04:04.150 02:20:51 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.150 02:20:51 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:04.150 02:20:51 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.150 02:20:51 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:04.150 02:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.150 [2024-05-15 02:20:51.403731] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
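The shutdown-and-relaunch just traced is the heart of the persistence check: SIGINT the configured target, poll with kill -0 in 0.5 s steps (up to 30 tries), then start a fresh instance from the JSON snapshot instead of repeating the RPCs. Stripped of the helper functions it is approximately:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done
    echo "SPDK target shutdown done"

    # relaunch, rehydrating the configuration saved earlier
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    app_pid=$!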
00:04:04.150 [2024-05-15 02:20:51.403816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180036 ] 00:04:04.150 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.408 [2024-05-15 02:20:51.782620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.667 [2024-05-15 02:20:51.879261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.951 [2024-05-15 02:20:54.918053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.951 [2024-05-15 02:20:54.950012] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:07.951 [2024-05-15 02:20:54.950462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:07.951 02:20:54 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:07.951 02:20:54 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:07.951 02:20:54 json_config -- json_config/common.sh@26 -- # echo '' 00:04:07.951 00:04:07.951 02:20:54 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:07.951 02:20:54 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:07.951 INFO: Checking if target configuration is the same... 00:04:07.951 02:20:54 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.951 02:20:54 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:07.951 02:20:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:07.951 + '[' 2 -ne 2 ']' 00:04:07.951 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:07.951 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:07.951 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.951 +++ basename /dev/fd/62 00:04:07.951 ++ mktemp /tmp/62.XXX 00:04:07.951 + tmp_file_1=/tmp/62.IIq 00:04:07.951 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.951 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:07.951 + tmp_file_2=/tmp/spdk_tgt_config.json.SRT 00:04:07.951 + ret=0 00:04:07.951 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.209 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.209 + diff -u /tmp/62.IIq /tmp/spdk_tgt_config.json.SRT 00:04:08.209 + echo 'INFO: JSON config files are the same' 00:04:08.209 INFO: JSON config files are the same 00:04:08.209 + rm /tmp/62.IIq /tmp/spdk_tgt_config.json.SRT 00:04:08.209 + exit 0 00:04:08.209 02:20:55 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:08.209 02:20:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:08.209 INFO: changing configuration and checking if this can be detected... 
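The "configuration is the same" verdict above comes from json_diff.sh, which never compares raw save_config output directly; both sides are normalized with config_filter.py -method sort first, since JSON ordering is not guaranteed across restarts. The comparison reduces to the following, with the /tmp file names standing in for the mktemp results seen above:

    # live configuration, normalized
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    # saved snapshot, normalized the same way
    ./test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/saved.json

    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'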
00:04:08.209 02:20:55 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.209 02:20:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.467 02:20:55 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.467 02:20:55 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:08.467 02:20:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.467 + '[' 2 -ne 2 ']' 00:04:08.467 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:08.467 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:08.467 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.467 +++ basename /dev/fd/62 00:04:08.467 ++ mktemp /tmp/62.XXX 00:04:08.467 + tmp_file_1=/tmp/62.MJi 00:04:08.467 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.467 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:08.467 + tmp_file_2=/tmp/spdk_tgt_config.json.IoN 00:04:08.467 + ret=0 00:04:08.467 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.724 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.724 + diff -u /tmp/62.MJi /tmp/spdk_tgt_config.json.IoN 00:04:08.724 + ret=1 00:04:08.724 + echo '=== Start of file: /tmp/62.MJi ===' 00:04:08.724 + cat /tmp/62.MJi 00:04:08.724 + echo '=== End of file: /tmp/62.MJi ===' 00:04:08.724 + echo '' 00:04:08.724 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IoN ===' 00:04:08.724 + cat /tmp/spdk_tgt_config.json.IoN 00:04:08.724 + echo '=== End of file: /tmp/spdk_tgt_config.json.IoN ===' 00:04:08.724 + echo '' 00:04:08.724 + rm /tmp/62.MJi /tmp/spdk_tgt_config.json.IoN 00:04:08.724 + exit 1 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:08.725 INFO: configuration change detected. 
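The change-detection pass is the same machinery run after a single mutation: MallocBdevForConfigChangeCheck is deleted over RPC, the live config is re-dumped and re-sorted, and this time the diff is expected to be non-empty (ret=1 above). In shorthand, continuing with the placeholder file names from the previous sketch:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json

    diff -u /tmp/live.json /tmp/saved.json || echo 'INFO: configuration change detected.'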
00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:08.725 02:20:56 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:08.725 02:20:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@317 -- # [[ -n 2180036 ]] 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:08.725 02:20:56 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:08.725 02:20:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:08.725 02:20:56 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:08.725 02:20:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.725 02:20:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.983 02:20:56 json_config -- json_config/json_config.sh@323 -- # killprocess 2180036 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@946 -- # '[' -z 2180036 ']' 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@950 -- # kill -0 2180036 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@951 -- # uname 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2180036 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2180036' 00:04:08.983 killing process with pid 2180036 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@965 -- # kill 2180036 00:04:08.983 [2024-05-15 02:20:56.172074] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:08.983 02:20:56 json_config -- common/autotest_common.sh@970 -- # wait 2180036 00:04:10.885 02:20:57 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.885 02:20:57 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:10.885 02:20:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.885 02:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.885 02:20:57 
json_config -- json_config/json_config.sh@328 -- # return 0 00:04:10.885 02:20:57 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:10.885 INFO: Success 00:04:10.885 00:04:10.885 real 0m16.029s 00:04:10.885 user 0m17.861s 00:04:10.885 sys 0m2.110s 00:04:10.885 02:20:57 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:10.885 02:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.885 ************************************ 00:04:10.885 END TEST json_config 00:04:10.885 ************************************ 00:04:10.885 02:20:57 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:10.885 02:20:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:10.885 02:20:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:10.885 02:20:57 -- common/autotest_common.sh@10 -- # set +x 00:04:10.885 ************************************ 00:04:10.885 START TEST json_config_extra_key 00:04:10.885 ************************************ 00:04:10.886 02:20:57 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:10.886 02:20:57 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.886 02:20:57 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.886 
02:20:57 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.886 02:20:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.886 02:20:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.886 02:20:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.886 02:20:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:10.886 02:20:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:10.886 02:20:57 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:10.886 02:20:57 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:10.886 INFO: launching applications... 00:04:10.886 02:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2181078 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.886 Waiting for target to run... 00:04:10.886 02:20:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2181078 /var/tmp/spdk_tgt.sock 00:04:10.886 02:20:57 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 2181078 ']' 00:04:10.886 02:20:57 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.886 02:20:57 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:10.886 02:20:57 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.886 02:20:57 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:10.886 02:20:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:10.886 [2024-05-15 02:20:58.000402] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
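As with the main json_config run, json_config_test_start_app assembles this launch from the associative arrays declared in common.sh (app_pid, app_socket, app_params, configs_path, all visible above). For the extra_key case the expansion works out to roughly the following; the relative paths are shorthand for the workspace paths in the trace:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='./test/json_config/extra_key.json')

    app=target
    # app_params is left unquoted on purpose so the flags split into words
    ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!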
00:04:10.886 [2024-05-15 02:20:58.000520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181078 ] 00:04:10.886 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.146 [2024-05-15 02:20:58.524868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.405 [2024-05-15 02:20:58.626495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.663 02:20:58 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:11.663 02:20:58 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:11.663 02:20:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:11.663 00:04:11.663 02:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:11.663 INFO: shutting down applications... 00:04:11.663 02:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:11.663 02:20:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:11.663 02:20:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:11.663 02:20:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2181078 ]] 00:04:11.663 02:20:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2181078 00:04:11.663 02:20:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:11.663 02:20:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.663 02:20:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2181078 00:04:11.663 02:20:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.231 02:20:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.231 02:20:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.231 02:20:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2181078 00:04:12.231 02:20:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.800 02:20:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.800 02:20:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.800 02:20:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2181078 00:04:12.800 02:20:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:12.800 02:20:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:12.800 02:20:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:12.800 02:20:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:12.800 SPDK target shutdown done 00:04:12.800 02:20:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:12.800 Success 00:04:12.800 00:04:12.800 real 0m2.045s 00:04:12.800 user 0m1.380s 00:04:12.800 sys 0m0.616s 00:04:12.800 02:20:59 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:12.800 02:20:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:12.800 ************************************ 00:04:12.800 END TEST json_config_extra_key 00:04:12.800 ************************************ 00:04:12.800 02:20:59 -- spdk/autotest.sh@170 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.800 02:20:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:12.800 02:20:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.800 02:20:59 -- common/autotest_common.sh@10 -- # set +x 00:04:12.800 ************************************ 00:04:12.800 START TEST alias_rpc 00:04:12.800 ************************************ 00:04:12.800 02:20:59 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.800 * Looking for test storage... 00:04:12.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:12.800 02:21:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:12.800 02:21:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2181416 00:04:12.800 02:21:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2181416 00:04:12.800 02:21:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.800 02:21:00 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 2181416 ']' 00:04:12.800 02:21:00 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.800 02:21:00 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:12.800 02:21:00 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.800 02:21:00 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:12.800 02:21:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.800 [2024-05-15 02:21:00.094881] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
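killprocess, which tears this target down a few records later, does not blindly signal the pid: it first checks the process's comm name with ps (here it sees reactor_0, not sudo), logs which pid it is killing, then kills and waits. Only the branch exercised in this run is sketched below, and the pid from this run is used purely as an illustration.

    pid=2181416    # pid from this run; illustrative only
    if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi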
00:04:12.800 [2024-05-15 02:21:00.095013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181416 ] 00:04:12.800 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.800 [2024-05-15 02:21:00.168671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.059 [2024-05-15 02:21:00.283332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.624 02:21:01 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:13.624 02:21:01 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:13.624 02:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:14.192 02:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2181416 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 2181416 ']' 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 2181416 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2181416 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2181416' 00:04:14.192 killing process with pid 2181416 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@965 -- # kill 2181416 00:04:14.192 02:21:01 alias_rpc -- common/autotest_common.sh@970 -- # wait 2181416 00:04:14.449 00:04:14.449 real 0m1.781s 00:04:14.449 user 0m2.004s 00:04:14.449 sys 0m0.500s 00:04:14.449 02:21:01 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:14.449 02:21:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.449 ************************************ 00:04:14.449 END TEST alias_rpc 00:04:14.449 ************************************ 00:04:14.449 02:21:01 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:14.449 02:21:01 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:14.449 02:21:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:14.449 02:21:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.449 02:21:01 -- common/autotest_common.sh@10 -- # set +x 00:04:14.449 ************************************ 00:04:14.449 START TEST spdkcli_tcp 00:04:14.449 ************************************ 00:04:14.449 02:21:01 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:14.707 * Looking for test storage... 
00:04:14.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:14.707 02:21:01 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:14.707 02:21:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2181748 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:14.707 02:21:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2181748 00:04:14.707 02:21:01 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 2181748 ']' 00:04:14.707 02:21:01 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.707 02:21:01 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:14.707 02:21:01 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.707 02:21:01 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:14.707 02:21:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.707 [2024-05-15 02:21:01.937329] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:04:14.708 [2024-05-15 02:21:01.937448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181748 ] 00:04:14.708 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.708 [2024-05-15 02:21:02.011551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.965 [2024-05-15 02:21:02.131286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.965 [2024-05-15 02:21:02.131290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.532 02:21:02 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:15.532 02:21:02 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:15.532 02:21:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2181844 00:04:15.532 02:21:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:15.532 02:21:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:15.790 [ 00:04:15.790 "bdev_malloc_delete", 00:04:15.790 "bdev_malloc_create", 00:04:15.790 "bdev_null_resize", 00:04:15.790 "bdev_null_delete", 00:04:15.790 "bdev_null_create", 00:04:15.790 "bdev_nvme_cuse_unregister", 00:04:15.790 "bdev_nvme_cuse_register", 00:04:15.790 "bdev_opal_new_user", 00:04:15.790 "bdev_opal_set_lock_state", 00:04:15.790 "bdev_opal_delete", 00:04:15.790 "bdev_opal_get_info", 00:04:15.790 "bdev_opal_create", 00:04:15.790 "bdev_nvme_opal_revert", 00:04:15.790 "bdev_nvme_opal_init", 00:04:15.790 "bdev_nvme_send_cmd", 00:04:15.790 "bdev_nvme_get_path_iostat", 00:04:15.790 "bdev_nvme_get_mdns_discovery_info", 00:04:15.790 "bdev_nvme_stop_mdns_discovery", 00:04:15.790 "bdev_nvme_start_mdns_discovery", 00:04:15.790 "bdev_nvme_set_multipath_policy", 00:04:15.790 "bdev_nvme_set_preferred_path", 00:04:15.790 "bdev_nvme_get_io_paths", 00:04:15.790 "bdev_nvme_remove_error_injection", 00:04:15.790 "bdev_nvme_add_error_injection", 00:04:15.790 "bdev_nvme_get_discovery_info", 00:04:15.790 "bdev_nvme_stop_discovery", 00:04:15.790 "bdev_nvme_start_discovery", 00:04:15.791 "bdev_nvme_get_controller_health_info", 00:04:15.791 "bdev_nvme_disable_controller", 00:04:15.791 "bdev_nvme_enable_controller", 00:04:15.791 "bdev_nvme_reset_controller", 00:04:15.791 "bdev_nvme_get_transport_statistics", 00:04:15.791 "bdev_nvme_apply_firmware", 00:04:15.791 "bdev_nvme_detach_controller", 00:04:15.791 "bdev_nvme_get_controllers", 00:04:15.791 "bdev_nvme_attach_controller", 00:04:15.791 "bdev_nvme_set_hotplug", 00:04:15.791 "bdev_nvme_set_options", 00:04:15.791 "bdev_passthru_delete", 00:04:15.791 "bdev_passthru_create", 00:04:15.791 "bdev_lvol_check_shallow_copy", 00:04:15.791 "bdev_lvol_start_shallow_copy", 00:04:15.791 "bdev_lvol_grow_lvstore", 00:04:15.791 "bdev_lvol_get_lvols", 00:04:15.791 "bdev_lvol_get_lvstores", 00:04:15.791 "bdev_lvol_delete", 00:04:15.791 "bdev_lvol_set_read_only", 00:04:15.791 "bdev_lvol_resize", 00:04:15.791 "bdev_lvol_decouple_parent", 00:04:15.791 "bdev_lvol_inflate", 00:04:15.791 "bdev_lvol_rename", 00:04:15.791 "bdev_lvol_clone_bdev", 00:04:15.791 "bdev_lvol_clone", 00:04:15.791 "bdev_lvol_snapshot", 00:04:15.791 "bdev_lvol_create", 00:04:15.791 "bdev_lvol_delete_lvstore", 00:04:15.791 "bdev_lvol_rename_lvstore", 00:04:15.791 "bdev_lvol_create_lvstore", 00:04:15.791 "bdev_raid_set_options", 
00:04:15.791 "bdev_raid_remove_base_bdev", 00:04:15.791 "bdev_raid_add_base_bdev", 00:04:15.791 "bdev_raid_delete", 00:04:15.791 "bdev_raid_create", 00:04:15.791 "bdev_raid_get_bdevs", 00:04:15.791 "bdev_error_inject_error", 00:04:15.791 "bdev_error_delete", 00:04:15.791 "bdev_error_create", 00:04:15.791 "bdev_split_delete", 00:04:15.791 "bdev_split_create", 00:04:15.791 "bdev_delay_delete", 00:04:15.791 "bdev_delay_create", 00:04:15.791 "bdev_delay_update_latency", 00:04:15.791 "bdev_zone_block_delete", 00:04:15.791 "bdev_zone_block_create", 00:04:15.791 "blobfs_create", 00:04:15.791 "blobfs_detect", 00:04:15.791 "blobfs_set_cache_size", 00:04:15.791 "bdev_aio_delete", 00:04:15.791 "bdev_aio_rescan", 00:04:15.791 "bdev_aio_create", 00:04:15.791 "bdev_ftl_set_property", 00:04:15.791 "bdev_ftl_get_properties", 00:04:15.791 "bdev_ftl_get_stats", 00:04:15.791 "bdev_ftl_unmap", 00:04:15.791 "bdev_ftl_unload", 00:04:15.791 "bdev_ftl_delete", 00:04:15.791 "bdev_ftl_load", 00:04:15.791 "bdev_ftl_create", 00:04:15.791 "bdev_virtio_attach_controller", 00:04:15.791 "bdev_virtio_scsi_get_devices", 00:04:15.791 "bdev_virtio_detach_controller", 00:04:15.791 "bdev_virtio_blk_set_hotplug", 00:04:15.791 "bdev_iscsi_delete", 00:04:15.791 "bdev_iscsi_create", 00:04:15.791 "bdev_iscsi_set_options", 00:04:15.791 "accel_error_inject_error", 00:04:15.791 "ioat_scan_accel_module", 00:04:15.791 "dsa_scan_accel_module", 00:04:15.791 "iaa_scan_accel_module", 00:04:15.791 "vfu_virtio_create_scsi_endpoint", 00:04:15.791 "vfu_virtio_scsi_remove_target", 00:04:15.791 "vfu_virtio_scsi_add_target", 00:04:15.791 "vfu_virtio_create_blk_endpoint", 00:04:15.791 "vfu_virtio_delete_endpoint", 00:04:15.791 "keyring_file_remove_key", 00:04:15.791 "keyring_file_add_key", 00:04:15.791 "iscsi_get_histogram", 00:04:15.791 "iscsi_enable_histogram", 00:04:15.791 "iscsi_set_options", 00:04:15.791 "iscsi_get_auth_groups", 00:04:15.791 "iscsi_auth_group_remove_secret", 00:04:15.791 "iscsi_auth_group_add_secret", 00:04:15.791 "iscsi_delete_auth_group", 00:04:15.791 "iscsi_create_auth_group", 00:04:15.791 "iscsi_set_discovery_auth", 00:04:15.791 "iscsi_get_options", 00:04:15.791 "iscsi_target_node_request_logout", 00:04:15.791 "iscsi_target_node_set_redirect", 00:04:15.791 "iscsi_target_node_set_auth", 00:04:15.791 "iscsi_target_node_add_lun", 00:04:15.791 "iscsi_get_stats", 00:04:15.791 "iscsi_get_connections", 00:04:15.791 "iscsi_portal_group_set_auth", 00:04:15.791 "iscsi_start_portal_group", 00:04:15.791 "iscsi_delete_portal_group", 00:04:15.791 "iscsi_create_portal_group", 00:04:15.791 "iscsi_get_portal_groups", 00:04:15.791 "iscsi_delete_target_node", 00:04:15.791 "iscsi_target_node_remove_pg_ig_maps", 00:04:15.791 "iscsi_target_node_add_pg_ig_maps", 00:04:15.791 "iscsi_create_target_node", 00:04:15.791 "iscsi_get_target_nodes", 00:04:15.791 "iscsi_delete_initiator_group", 00:04:15.791 "iscsi_initiator_group_remove_initiators", 00:04:15.791 "iscsi_initiator_group_add_initiators", 00:04:15.791 "iscsi_create_initiator_group", 00:04:15.792 "iscsi_get_initiator_groups", 00:04:15.792 "nvmf_set_crdt", 00:04:15.792 "nvmf_set_config", 00:04:15.792 "nvmf_set_max_subsystems", 00:04:15.792 "nvmf_subsystem_get_listeners", 00:04:15.792 "nvmf_subsystem_get_qpairs", 00:04:15.792 "nvmf_subsystem_get_controllers", 00:04:15.792 "nvmf_get_stats", 00:04:15.792 "nvmf_get_transports", 00:04:15.792 "nvmf_create_transport", 00:04:15.792 "nvmf_get_targets", 00:04:15.792 "nvmf_delete_target", 00:04:15.792 "nvmf_create_target", 00:04:15.792 
"nvmf_subsystem_allow_any_host", 00:04:15.792 "nvmf_subsystem_remove_host", 00:04:15.792 "nvmf_subsystem_add_host", 00:04:15.792 "nvmf_ns_remove_host", 00:04:15.792 "nvmf_ns_add_host", 00:04:15.792 "nvmf_subsystem_remove_ns", 00:04:15.792 "nvmf_subsystem_add_ns", 00:04:15.792 "nvmf_subsystem_listener_set_ana_state", 00:04:15.792 "nvmf_discovery_get_referrals", 00:04:15.792 "nvmf_discovery_remove_referral", 00:04:15.792 "nvmf_discovery_add_referral", 00:04:15.792 "nvmf_subsystem_remove_listener", 00:04:15.792 "nvmf_subsystem_add_listener", 00:04:15.792 "nvmf_delete_subsystem", 00:04:15.792 "nvmf_create_subsystem", 00:04:15.792 "nvmf_get_subsystems", 00:04:15.792 "env_dpdk_get_mem_stats", 00:04:15.792 "nbd_get_disks", 00:04:15.792 "nbd_stop_disk", 00:04:15.792 "nbd_start_disk", 00:04:15.792 "ublk_recover_disk", 00:04:15.792 "ublk_get_disks", 00:04:15.792 "ublk_stop_disk", 00:04:15.792 "ublk_start_disk", 00:04:15.792 "ublk_destroy_target", 00:04:15.792 "ublk_create_target", 00:04:15.792 "virtio_blk_create_transport", 00:04:15.792 "virtio_blk_get_transports", 00:04:15.792 "vhost_controller_set_coalescing", 00:04:15.792 "vhost_get_controllers", 00:04:15.792 "vhost_delete_controller", 00:04:15.792 "vhost_create_blk_controller", 00:04:15.792 "vhost_scsi_controller_remove_target", 00:04:15.792 "vhost_scsi_controller_add_target", 00:04:15.792 "vhost_start_scsi_controller", 00:04:15.792 "vhost_create_scsi_controller", 00:04:15.792 "thread_set_cpumask", 00:04:15.792 "framework_get_scheduler", 00:04:15.792 "framework_set_scheduler", 00:04:15.792 "framework_get_reactors", 00:04:15.792 "thread_get_io_channels", 00:04:15.792 "thread_get_pollers", 00:04:15.792 "thread_get_stats", 00:04:15.792 "framework_monitor_context_switch", 00:04:15.792 "spdk_kill_instance", 00:04:15.792 "log_enable_timestamps", 00:04:15.792 "log_get_flags", 00:04:15.792 "log_clear_flag", 00:04:15.792 "log_set_flag", 00:04:15.792 "log_get_level", 00:04:15.792 "log_set_level", 00:04:15.792 "log_get_print_level", 00:04:15.792 "log_set_print_level", 00:04:15.792 "framework_enable_cpumask_locks", 00:04:15.792 "framework_disable_cpumask_locks", 00:04:15.792 "framework_wait_init", 00:04:15.792 "framework_start_init", 00:04:15.792 "scsi_get_devices", 00:04:15.792 "bdev_get_histogram", 00:04:15.792 "bdev_enable_histogram", 00:04:15.792 "bdev_set_qos_limit", 00:04:15.792 "bdev_set_qd_sampling_period", 00:04:15.792 "bdev_get_bdevs", 00:04:15.792 "bdev_reset_iostat", 00:04:15.792 "bdev_get_iostat", 00:04:15.792 "bdev_examine", 00:04:15.792 "bdev_wait_for_examine", 00:04:15.792 "bdev_set_options", 00:04:15.792 "notify_get_notifications", 00:04:15.792 "notify_get_types", 00:04:15.792 "accel_get_stats", 00:04:15.792 "accel_set_options", 00:04:15.792 "accel_set_driver", 00:04:15.792 "accel_crypto_key_destroy", 00:04:15.792 "accel_crypto_keys_get", 00:04:15.792 "accel_crypto_key_create", 00:04:15.792 "accel_assign_opc", 00:04:15.792 "accel_get_module_info", 00:04:15.792 "accel_get_opc_assignments", 00:04:15.792 "vmd_rescan", 00:04:15.792 "vmd_remove_device", 00:04:15.792 "vmd_enable", 00:04:15.792 "sock_get_default_impl", 00:04:15.792 "sock_set_default_impl", 00:04:15.792 "sock_impl_set_options", 00:04:15.792 "sock_impl_get_options", 00:04:15.792 "iobuf_get_stats", 00:04:15.792 "iobuf_set_options", 00:04:15.792 "keyring_get_keys", 00:04:15.792 "framework_get_pci_devices", 00:04:15.792 "framework_get_config", 00:04:15.792 "framework_get_subsystems", 00:04:15.792 "vfu_tgt_set_base_path", 00:04:15.792 "trace_get_info", 00:04:15.792 
"trace_get_tpoint_group_mask", 00:04:15.792 "trace_disable_tpoint_group", 00:04:15.792 "trace_enable_tpoint_group", 00:04:15.792 "trace_clear_tpoint_mask", 00:04:15.792 "trace_set_tpoint_mask", 00:04:15.792 "spdk_get_version", 00:04:15.793 "rpc_get_methods" 00:04:15.793 ] 00:04:15.793 02:21:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.793 02:21:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:15.793 02:21:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2181748 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 2181748 ']' 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 2181748 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2181748 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2181748' 00:04:15.793 killing process with pid 2181748 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 2181748 00:04:15.793 02:21:03 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 2181748 00:04:16.358 00:04:16.358 real 0m1.814s 00:04:16.358 user 0m3.430s 00:04:16.358 sys 0m0.513s 00:04:16.358 02:21:03 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:16.358 02:21:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.358 ************************************ 00:04:16.358 END TEST spdkcli_tcp 00:04:16.358 ************************************ 00:04:16.358 02:21:03 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:16.358 02:21:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:16.358 02:21:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.358 02:21:03 -- common/autotest_common.sh@10 -- # set +x 00:04:16.358 ************************************ 00:04:16.358 START TEST dpdk_mem_utility 00:04:16.358 ************************************ 00:04:16.358 02:21:03 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:16.358 * Looking for test storage... 
00:04:16.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:16.358 02:21:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:16.358 02:21:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2182035 00:04:16.358 02:21:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.358 02:21:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2182035 00:04:16.358 02:21:03 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 2182035 ']' 00:04:16.358 02:21:03 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.358 02:21:03 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:16.358 02:21:03 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.358 02:21:03 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:16.358 02:21:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.617 [2024-05-15 02:21:03.794730] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:16.617 [2024-05-15 02:21:03.794817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182035 ] 00:04:16.617 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.617 [2024-05-15 02:21:03.867737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.617 [2024-05-15 02:21:03.976386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.593 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:17.593 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:17.593 02:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:17.593 02:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:17.593 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.593 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:17.593 { 00:04:17.593 "filename": "/tmp/spdk_mem_dump.txt" 00:04:17.593 } 00:04:17.593 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.593 02:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:17.593 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:17.593 1 heaps totaling size 814.000000 MiB 00:04:17.593 size: 814.000000 MiB heap id: 0 00:04:17.593 end heaps---------- 00:04:17.593 8 mempools totaling size 598.116089 MiB 00:04:17.593 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:17.593 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:17.593 size: 84.521057 MiB name: bdev_io_2182035 00:04:17.593 size: 51.011292 MiB name: evtpool_2182035 00:04:17.593 size: 50.003479 MiB name: 
msgpool_2182035 00:04:17.593 size: 21.763794 MiB name: PDU_Pool 00:04:17.593 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:17.593 size: 0.026123 MiB name: Session_Pool 00:04:17.593 end mempools------- 00:04:17.593 6 memzones totaling size 4.142822 MiB 00:04:17.593 size: 1.000366 MiB name: RG_ring_0_2182035 00:04:17.593 size: 1.000366 MiB name: RG_ring_1_2182035 00:04:17.593 size: 1.000366 MiB name: RG_ring_4_2182035 00:04:17.593 size: 1.000366 MiB name: RG_ring_5_2182035 00:04:17.593 size: 0.125366 MiB name: RG_ring_2_2182035 00:04:17.593 size: 0.015991 MiB name: RG_ring_3_2182035 00:04:17.593 end memzones------- 00:04:17.593 02:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:17.593 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:17.593 list of free elements. size: 12.519348 MiB 00:04:17.593 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:17.593 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:17.594 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:17.594 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:17.594 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:17.594 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:17.594 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:17.594 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:17.594 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:17.594 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:17.594 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:17.594 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:17.594 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:17.594 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:17.594 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:17.594 list of standard malloc elements. 
size: 199.218079 MiB 00:04:17.594 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:17.594 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:17.594 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:17.594 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:17.594 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:17.594 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:17.594 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:17.594 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:17.594 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:17.594 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:17.594 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:17.594 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:17.594 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:17.594 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:17.594 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:17.594 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:17.594 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:17.594 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:17.594 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:17.594 list of memzone associated elements. 
size: 602.262573 MiB 00:04:17.594 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:17.594 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:17.594 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:17.594 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:17.594 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:17.594 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2182035_0 00:04:17.594 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:17.594 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2182035_0 00:04:17.594 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:17.594 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2182035_0 00:04:17.594 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:17.594 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:17.594 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:17.594 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:17.594 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:17.594 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2182035 00:04:17.594 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:17.594 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2182035 00:04:17.594 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:17.594 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2182035 00:04:17.594 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:17.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:17.594 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:17.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:17.594 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:17.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:17.594 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:17.594 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:17.594 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:17.594 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2182035 00:04:17.594 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:17.594 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2182035 00:04:17.594 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:17.594 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2182035 00:04:17.594 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:17.594 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2182035 00:04:17.594 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:17.594 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2182035 00:04:17.594 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:17.594 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:17.594 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:17.594 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:17.594 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:17.594 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:17.594 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:17.594 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2182035 00:04:17.594 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:17.594 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:17.594 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:17.594 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:17.594 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:17.594 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2182035 00:04:17.594 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:17.594 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:17.594 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:17.594 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2182035 00:04:17.594 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:17.594 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2182035 00:04:17.594 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:17.594 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:17.594 02:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:17.594 02:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2182035 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 2182035 ']' 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 2182035 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2182035 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2182035' 00:04:17.594 killing process with pid 2182035 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 2182035 00:04:17.594 02:21:04 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 2182035 00:04:18.159 00:04:18.159 real 0m1.651s 00:04:18.159 user 0m1.776s 00:04:18.159 sys 0m0.481s 00:04:18.159 02:21:05 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:18.159 02:21:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:18.159 ************************************ 00:04:18.159 END TEST dpdk_mem_utility 00:04:18.159 ************************************ 00:04:18.159 02:21:05 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:18.159 02:21:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.159 02:21:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.159 02:21:05 -- common/autotest_common.sh@10 -- # set +x 00:04:18.159 ************************************ 00:04:18.159 START TEST event 00:04:18.159 ************************************ 00:04:18.159 02:21:05 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:18.159 * Looking for test storage... 
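Two pieces drive the dpdk_mem_utility output above: the env_dpdk_get_mem_stats RPC, which has the target write its heap/mempool/memzone statistics to /tmp/spdk_mem_dump.txt (the {"filename": ...} reply in the log), and scripts/dpdk_mem_info.py, which parses that file offline. A condensed sketch of the flow, with the invocations taken from the trace; rpc_cmd is the test suite's wrapper around the target's RPC socket:

  rpc_cmd env_dpdk_get_mem_stats     # target dumps stats to /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py           # summary view: the 1 heap, 8 mempools, 6 memzones listed above
  scripts/dpdk_mem_info.py -m 0      # the second, detailed listing above (free/malloc elements and memzones for heap id 0)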
00:04:18.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:18.159 02:21:05 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:18.159 02:21:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:18.159 02:21:05 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:18.159 02:21:05 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:18.159 02:21:05 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.159 02:21:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.159 ************************************ 00:04:18.159 START TEST event_perf 00:04:18.159 ************************************ 00:04:18.159 02:21:05 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:18.159 Running I/O for 1 seconds...[2024-05-15 02:21:05.488026] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:18.159 [2024-05-15 02:21:05.488084] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182361 ] 00:04:18.159 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.159 [2024-05-15 02:21:05.561879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:18.416 [2024-05-15 02:21:05.681008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.416 [2024-05-15 02:21:05.681074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:18.416 [2024-05-15 02:21:05.681174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:18.416 [2024-05-15 02:21:05.681177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.803 Running I/O for 1 seconds... 00:04:19.803 lcore 0: 220330 00:04:19.803 lcore 1: 220329 00:04:19.803 lcore 2: 220330 00:04:19.803 lcore 3: 220330 00:04:19.803 done. 00:04:19.803 00:04:19.803 real 0m1.329s 00:04:19.803 user 0m4.231s 00:04:19.803 sys 0m0.094s 00:04:19.803 02:21:06 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:19.803 02:21:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:19.803 ************************************ 00:04:19.803 END TEST event_perf 00:04:19.803 ************************************ 00:04:19.803 02:21:06 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:19.803 02:21:06 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:19.803 02:21:06 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:19.803 02:21:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:19.803 ************************************ 00:04:19.803 START TEST event_reactor 00:04:19.803 ************************************ 00:04:19.803 02:21:06 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:19.803 [2024-05-15 02:21:06.876363] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:04:19.803 [2024-05-15 02:21:06.876435] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182516 ] 00:04:19.803 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.803 [2024-05-15 02:21:06.951029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.803 [2024-05-15 02:21:07.068612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.174 test_start 00:04:21.174 oneshot 00:04:21.174 tick 100 00:04:21.174 tick 100 00:04:21.174 tick 250 00:04:21.174 tick 100 00:04:21.174 tick 100 00:04:21.174 tick 100 00:04:21.174 tick 250 00:04:21.174 tick 500 00:04:21.174 tick 100 00:04:21.174 tick 100 00:04:21.174 tick 250 00:04:21.174 tick 100 00:04:21.174 tick 100 00:04:21.174 test_end 00:04:21.174 00:04:21.174 real 0m1.327s 00:04:21.174 user 0m1.229s 00:04:21.174 sys 0m0.092s 00:04:21.174 02:21:08 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:21.174 02:21:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:21.174 ************************************ 00:04:21.174 END TEST event_reactor 00:04:21.174 ************************************ 00:04:21.174 02:21:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:21.174 02:21:08 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:21.174 02:21:08 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.174 02:21:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.174 ************************************ 00:04:21.174 START TEST event_reactor_perf 00:04:21.174 ************************************ 00:04:21.174 02:21:08 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:21.174 [2024-05-15 02:21:08.258497] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:04:21.175 [2024-05-15 02:21:08.258562] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183035 ] 00:04:21.175 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.175 [2024-05-15 02:21:08.334327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.175 [2024-05-15 02:21:08.454361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.548 test_start 00:04:22.548 test_end 00:04:22.548 Performance: 357596 events per second 00:04:22.548 00:04:22.548 real 0m1.333s 00:04:22.548 user 0m1.236s 00:04:22.548 sys 0m0.090s 00:04:22.548 02:21:09 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.548 02:21:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:22.548 ************************************ 00:04:22.548 END TEST event_reactor_perf 00:04:22.548 ************************************ 00:04:22.548 02:21:09 event -- event/event.sh@49 -- # uname -s 00:04:22.548 02:21:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:22.549 02:21:09 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:22.549 02:21:09 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.549 02:21:09 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.549 02:21:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.549 ************************************ 00:04:22.549 START TEST event_scheduler 00:04:22.549 ************************************ 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:22.549 * Looking for test storage... 00:04:22.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:22.549 02:21:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:22.549 02:21:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2183374 00:04:22.549 02:21:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:22.549 02:21:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.549 02:21:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2183374 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 2183374 ']' 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.549 [2024-05-15 02:21:09.729952] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:22.549 [2024-05-15 02:21:09.730044] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183374 ] 00:04:22.549 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.549 [2024-05-15 02:21:09.806156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:22.549 [2024-05-15 02:21:09.918377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.549 [2024-05-15 02:21:09.918465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:22.549 [2024-05-15 02:21:09.918407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.549 [2024-05-15 02:21:09.918468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:22.549 02:21:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.549 02:21:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.549 POWER: Env isn't set yet! 00:04:22.549 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:22.549 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:22.549 POWER: Cannot get available frequencies of lcore 0 00:04:22.549 POWER: Attempting to initialise PSTAT power management... 00:04:22.549 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:22.549 POWER: Initialized successfully for lcore 0 power management 00:04:22.807 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:22.807 POWER: Initialized successfully for lcore 1 power management 00:04:22.807 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:22.807 POWER: Initialized successfully for lcore 2 power management 00:04:22.807 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:22.807 POWER: Initialized successfully for lcore 3 power management 00:04:22.807 02:21:09 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.807 02:21:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:22.807 02:21:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.807 02:21:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.807 [2024-05-15 02:21:10.096155] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:22.807 02:21:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.807 02:21:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:22.807 02:21:10 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.807 02:21:10 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.807 02:21:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.807 ************************************ 00:04:22.807 START TEST scheduler_create_thread 00:04:22.807 ************************************ 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.807 2 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.807 3 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.807 4 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.807 5 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.807 6 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.807 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.807 7 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.808 8 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.808 9 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.808 10 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.808 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.066 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.066 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:23.066 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:23.066 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.066 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.066 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.066 02:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:23.066 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.066 02:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.439 02:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.439 02:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:24.439 02:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:24.439 02:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.439 02:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.371 02:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.371 00:04:25.371 real 0m2.617s 00:04:25.371 user 0m0.012s 00:04:25.371 sys 0m0.003s 00:04:25.371 02:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.371 02:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.371 ************************************ 00:04:25.371 END TEST scheduler_create_thread 00:04:25.371 ************************************ 00:04:25.371 02:21:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:25.371 02:21:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2183374 00:04:25.371 02:21:12 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 2183374 ']' 00:04:25.371 02:21:12 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 2183374 00:04:25.371 02:21:12 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:25.371 02:21:12 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:25.371 02:21:12 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2183374 00:04:25.628 02:21:12 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:25.628 02:21:12 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:25.628 02:21:12 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2183374' 00:04:25.628 killing process with pid 2183374 00:04:25.628 02:21:12 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 2183374 00:04:25.628 02:21:12 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 2183374 00:04:25.887 [2024-05-15 02:21:13.227621] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
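The scheduler_create_thread subtest above talks to the scheduler test application exclusively through its plugin RPCs; every step is visible in the xtrace. The calls have the following shape, where, going by the thread names used in the test, -n is a thread name, -m a pin cpumask, and -a a target busy percentage:

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
  # the returned thread ids (11 and 12 in this run) are then driven directly:
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12

The POWER messages that follow are the application restoring the original cpufreq governors on shutdown.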
00:04:26.146 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:26.146 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:26.146 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:26.146 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:26.146 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:26.146 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:26.146 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:26.146 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:26.146 00:04:26.146 real 0m3.868s 00:04:26.146 user 0m5.782s 00:04:26.146 sys 0m0.362s 00:04:26.146 02:21:13 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.146 02:21:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:26.146 ************************************ 00:04:26.146 END TEST event_scheduler 00:04:26.146 ************************************ 00:04:26.146 02:21:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:26.146 02:21:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:26.146 02:21:13 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:26.146 02:21:13 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.146 02:21:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.404 ************************************ 00:04:26.404 START TEST app_repeat 00:04:26.404 ************************************ 00:04:26.404 02:21:13 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2183938 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2183938' 00:04:26.404 Process app_repeat pid: 2183938 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:26.404 spdk_app_start Round 0 00:04:26.404 02:21:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2183938 /var/tmp/spdk-nbd.sock 00:04:26.404 02:21:13 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2183938 ']' 00:04:26.404 02:21:13 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:26.404 02:21:13 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:26.404 02:21:13 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:26.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:26.404 02:21:13 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:26.404 02:21:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:26.404 [2024-05-15 02:21:13.592752] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:26.404 [2024-05-15 02:21:13.592819] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183938 ] 00:04:26.404 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.404 [2024-05-15 02:21:13.667206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.404 [2024-05-15 02:21:13.782213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.404 [2024-05-15 02:21:13.782218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.662 02:21:13 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:26.662 02:21:13 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:26.662 02:21:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.920 Malloc0 00:04:26.920 02:21:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.178 Malloc1 00:04:27.179 02:21:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.179 02:21:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:27.436 /dev/nbd0 00:04:27.436 02:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:27.436 02:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.436 1+0 records in 00:04:27.436 1+0 records out 00:04:27.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168223 s, 24.3 MB/s 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:27.436 02:21:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:27.436 02:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.436 02:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.436 02:21:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:27.694 /dev/nbd1 00:04:27.694 02:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:27.694 02:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.694 1+0 records in 00:04:27.694 1+0 records out 00:04:27.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000236845 s, 17.3 MB/s 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:27.694 02:21:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:27.694 02:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.694 02:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.694 02:21:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.694 02:21:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.694 02:21:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:27.952 { 00:04:27.952 "nbd_device": "/dev/nbd0", 00:04:27.952 "bdev_name": "Malloc0" 00:04:27.952 }, 00:04:27.952 { 00:04:27.952 "nbd_device": "/dev/nbd1", 00:04:27.952 "bdev_name": "Malloc1" 00:04:27.952 } 00:04:27.952 ]' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:27.952 { 00:04:27.952 "nbd_device": "/dev/nbd0", 00:04:27.952 "bdev_name": "Malloc0" 00:04:27.952 }, 00:04:27.952 { 00:04:27.952 "nbd_device": "/dev/nbd1", 00:04:27.952 "bdev_name": "Malloc1" 00:04:27.952 } 00:04:27.952 ]' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:27.952 /dev/nbd1' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:27.952 /dev/nbd1' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:27.952 256+0 records in 00:04:27.952 256+0 records out 00:04:27.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501734 s, 209 MB/s 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:27.952 256+0 records in 00:04:27.952 256+0 records out 00:04:27.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238466 s, 44.0 MB/s 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:27.952 256+0 records in 00:04:27.952 256+0 records out 00:04:27.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261833 s, 40.0 MB/s 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.952 02:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.210 02:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:28.467 02:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:28.467 02:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:28.467 02:21:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:28.467 02:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.468 02:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:28.468 02:21:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:28.468 02:21:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:28.468 02:21:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.468 02:21:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:28.468 02:21:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.468 02:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:28.726 02:21:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:28.726 02:21:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:28.984 02:21:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:29.242 [2024-05-15 02:21:16.633332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.500 [2024-05-15 02:21:16.748254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.500 [2024-05-15 02:21:16.748254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.501 [2024-05-15 02:21:16.810076] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:29.501 [2024-05-15 02:21:16.810150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
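Stripped of the xtrace noise, the data path exercised in Round 0 above is a plain write-then-verify loop over the two exported nbd devices: fill a scratch file with 1 MiB of random data, dd it onto each device with O_DIRECT, then byte-compare each device against the file. A minimal sketch of that loop, reusing the block size, count and cmp invocation from the trace (the scratch-file path here is illustrative; the devices are assumed to already be exported as shown earlier):

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=$(mktemp)      # scratch path is illustrative; the test keeps it under test/event/

# Write phase: 256 x 4 KiB of random data, pushed to every device with O_DIRECT.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# Verify phase: compare the first 1 MiB of each device against the scratch file.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm "$tmp_file"

cmp exits non-zero on the first mismatching byte, which is what makes the round fail immediately if a device hands back stale or corrupted data.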
00:04:32.059 02:21:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:32.059 02:21:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:32.059 spdk_app_start Round 1 00:04:32.059 02:21:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2183938 /var/tmp/spdk-nbd.sock 00:04:32.059 02:21:19 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2183938 ']' 00:04:32.059 02:21:19 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.059 02:21:19 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:32.059 02:21:19 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:32.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:32.059 02:21:19 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:32.059 02:21:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.316 02:21:19 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:32.316 02:21:19 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:32.317 02:21:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.575 Malloc0 00:04:32.575 02:21:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.833 Malloc1 00:04:32.833 02:21:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.833 02:21:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.091 /dev/nbd0 00:04:33.091 02:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.091 02:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.091 1+0 records in 00:04:33.091 1+0 records out 00:04:33.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172979 s, 23.7 MB/s 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:33.091 02:21:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:33.091 02:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.091 02:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.091 02:21:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:33.348 /dev/nbd1 00:04:33.348 02:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:33.348 02:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.348 1+0 records in 00:04:33.348 1+0 records out 00:04:33.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192034 s, 21.3 MB/s 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:33.348 02:21:20 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:33.348 02:21:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:33.348 02:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.348 02:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.348 02:21:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.348 02:21:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.348 02:21:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:33.605 { 00:04:33.605 "nbd_device": "/dev/nbd0", 00:04:33.605 "bdev_name": "Malloc0" 00:04:33.605 }, 00:04:33.605 { 00:04:33.605 "nbd_device": "/dev/nbd1", 00:04:33.605 "bdev_name": "Malloc1" 00:04:33.605 } 00:04:33.605 ]' 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:33.605 { 00:04:33.605 "nbd_device": "/dev/nbd0", 00:04:33.605 "bdev_name": "Malloc0" 00:04:33.605 }, 00:04:33.605 { 00:04:33.605 "nbd_device": "/dev/nbd1", 00:04:33.605 "bdev_name": "Malloc1" 00:04:33.605 } 00:04:33.605 ]' 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:33.605 /dev/nbd1' 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:33.605 /dev/nbd1' 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:33.605 256+0 records in 00:04:33.605 256+0 records out 00:04:33.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524455 s, 200 MB/s 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:33.605 256+0 records in 00:04:33.605 256+0 records out 00:04:33.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0228998 s, 45.8 MB/s 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:33.605 256+0 records in 00:04:33.605 256+0 records out 00:04:33.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242302 s, 43.3 MB/s 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:33.605 02:21:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.605 02:21:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:33.605 02:21:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.606 02:21:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:33.863 02:21:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:33.863 02:21:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:33.863 02:21:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:33.863 02:21:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.863 02:21:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.863 02:21:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:33.863 02:21:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.863 02:21:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.863 02:21:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.864 02:21:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.121 02:21:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:34.380 02:21:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:34.380 02:21:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:34.638 02:21:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.205 [2024-05-15 02:21:22.316295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.205 [2024-05-15 02:21:22.431614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.205 [2024-05-15 02:21:22.431618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.205 [2024-05-15 02:21:22.490709] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:35.205 [2024-05-15 02:21:22.490794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
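Each round rebuilds the same fixture before the verify loop runs: two malloc bdevs are created over the app's /var/tmp/spdk-nbd.sock RPC socket, each is exported as a kernel nbd device, and the helper then polls /proc/partitions until the node appears. A simplified sketch using only the RPC calls visible in the trace (the retry window is illustrative, and the real waitfornbd helper additionally does a 4 KiB direct read from the new device, omitted here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# Two malloc bdevs, same arguments as the trace; the RPC prints the
# generated bdev names (Malloc0 and Malloc1 in this run).
"$rpc" -s "$sock" bdev_malloc_create 64 4096
"$rpc" -s "$sock" bdev_malloc_create 64 4096

# Export each bdev as an nbd device, then wait for the kernel node to show up.
"$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
"$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
for name in nbd0 nbd1; do
    for i in $(seq 1 20); do        # retry window is illustrative
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1
    done
done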
00:04:37.750 02:21:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:37.750 02:21:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:37.750 spdk_app_start Round 2 00:04:37.750 02:21:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2183938 /var/tmp/spdk-nbd.sock 00:04:37.750 02:21:25 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2183938 ']' 00:04:37.750 02:21:25 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.750 02:21:25 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:37.750 02:21:25 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:37.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:37.750 02:21:25 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:37.750 02:21:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.007 02:21:25 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:38.007 02:21:25 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:38.007 02:21:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.265 Malloc0 00:04:38.265 02:21:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.523 Malloc1 00:04:38.523 02:21:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.523 02:21:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:38.782 /dev/nbd0 00:04:38.782 02:21:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:38.782 02:21:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.782 1+0 records in 00:04:38.782 1+0 records out 00:04:38.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183563 s, 22.3 MB/s 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:38.782 02:21:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:38.782 02:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.782 02:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.782 02:21:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.040 /dev/nbd1 00:04:39.040 02:21:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.040 02:21:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.040 1+0 records in 00:04:39.040 1+0 records out 00:04:39.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019954 s, 20.5 MB/s 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:39.040 02:21:26 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:39.040 02:21:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:39.040 02:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.040 02:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.040 02:21:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.040 02:21:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.040 02:21:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:39.299 { 00:04:39.299 "nbd_device": "/dev/nbd0", 00:04:39.299 "bdev_name": "Malloc0" 00:04:39.299 }, 00:04:39.299 { 00:04:39.299 "nbd_device": "/dev/nbd1", 00:04:39.299 "bdev_name": "Malloc1" 00:04:39.299 } 00:04:39.299 ]' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:39.299 { 00:04:39.299 "nbd_device": "/dev/nbd0", 00:04:39.299 "bdev_name": "Malloc0" 00:04:39.299 }, 00:04:39.299 { 00:04:39.299 "nbd_device": "/dev/nbd1", 00:04:39.299 "bdev_name": "Malloc1" 00:04:39.299 } 00:04:39.299 ]' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:39.299 /dev/nbd1' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:39.299 /dev/nbd1' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.299 256+0 records in 00:04:39.299 256+0 records out 00:04:39.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405882 s, 258 MB/s 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.299 256+0 records in 00:04:39.299 256+0 records out 00:04:39.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0221953 s, 47.2 MB/s 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.299 256+0 records in 00:04:39.299 256+0 records out 00:04:39.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230704 s, 45.5 MB/s 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.299 02:21:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:39.557 02:21:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:39.557 02:21:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:39.557 02:21:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:39.558 02:21:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.558 02:21:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.558 02:21:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:39.558 02:21:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.558 02:21:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.558 02:21:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.558 02:21:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.816 02:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:40.074 02:21:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:40.074 02:21:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:40.332 02:21:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:40.590 [2024-05-15 02:21:28.004357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.848 [2024-05-15 02:21:28.120320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.848 [2024-05-15 02:21:28.120319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.848 [2024-05-15 02:21:28.180124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:40.848 [2024-05-15 02:21:28.180191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
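The teardown at the end of every round mirrors that setup: each nbd export is stopped, the script waits for the node to drop out of /proc/partitions, nbd_get_disks is checked to confirm nothing is left attached, and only then is the app told to stop. A minimal sketch, again limited to the calls that appear in the trace (the jq filter is the one used above; the retry window and empty-list handling are illustrative):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# Detach both exports and wait until the kernel drops each partitions entry.
for name in nbd0 nbd1; do
    "$rpc" -s "$sock" nbd_stop_disk "/dev/$name"
    for i in $(seq 1 20); do        # retry window is illustrative
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done

# Sanity check: nbd_get_disks should now report no devices at all.
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]

# Ask the running app to stop this iteration; app_repeat then starts the next
# round (the "Shutdown signal received, stop current app iteration" lines below).
"$rpc" -s "$sock" spdk_kill_instance SIGTERM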
00:04:43.376 02:21:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2183938 /var/tmp/spdk-nbd.sock 00:04:43.376 02:21:30 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2183938 ']' 00:04:43.376 02:21:30 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.376 02:21:30 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:43.376 02:21:30 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:43.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:43.377 02:21:30 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:43.377 02:21:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.634 02:21:30 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:43.634 02:21:30 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:43.634 02:21:30 event.app_repeat -- event/event.sh@39 -- # killprocess 2183938 00:04:43.634 02:21:30 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 2183938 ']' 00:04:43.634 02:21:30 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 2183938 00:04:43.634 02:21:30 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:04:43.634 02:21:30 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:43.634 02:21:30 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2183938 00:04:43.635 02:21:31 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:43.635 02:21:31 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:43.635 02:21:31 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2183938' 00:04:43.635 killing process with pid 2183938 00:04:43.635 02:21:31 event.app_repeat -- common/autotest_common.sh@965 -- # kill 2183938 00:04:43.635 02:21:31 event.app_repeat -- common/autotest_common.sh@970 -- # wait 2183938 00:04:43.894 spdk_app_start is called in Round 0. 00:04:43.894 Shutdown signal received, stop current app iteration 00:04:43.894 Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 reinitialization... 00:04:43.894 spdk_app_start is called in Round 1. 00:04:43.894 Shutdown signal received, stop current app iteration 00:04:43.894 Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 reinitialization... 00:04:43.894 spdk_app_start is called in Round 2. 00:04:43.894 Shutdown signal received, stop current app iteration 00:04:43.894 Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 reinitialization... 00:04:43.894 spdk_app_start is called in Round 3. 
00:04:43.894 Shutdown signal received, stop current app iteration 00:04:43.894 02:21:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:43.894 02:21:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:43.894 00:04:43.894 real 0m17.676s 00:04:43.894 user 0m38.528s 00:04:43.894 sys 0m3.225s 00:04:43.894 02:21:31 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.894 02:21:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.894 ************************************ 00:04:43.894 END TEST app_repeat 00:04:43.894 ************************************ 00:04:43.894 02:21:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:43.894 02:21:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:43.894 02:21:31 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.894 02:21:31 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.894 02:21:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.894 ************************************ 00:04:43.894 START TEST cpu_locks 00:04:43.894 ************************************ 00:04:43.894 02:21:31 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:44.153 * Looking for test storage... 00:04:44.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:44.153 02:21:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:44.153 02:21:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:44.153 02:21:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:44.153 02:21:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:44.153 02:21:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.153 02:21:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.153 02:21:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.153 ************************************ 00:04:44.153 START TEST default_locks 00:04:44.153 ************************************ 00:04:44.153 02:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:04:44.153 02:21:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2186285 00:04:44.153 02:21:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.153 02:21:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2186285 00:04:44.153 02:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2186285 ']' 00:04:44.153 02:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.153 02:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:44.153 02:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.153 02:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:44.153 02:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.153 [2024-05-15 02:21:31.433355] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:44.153 [2024-05-15 02:21:31.433430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186285 ] 00:04:44.153 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.153 [2024-05-15 02:21:31.500188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.411 [2024-05-15 02:21:31.607995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.670 02:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:44.670 02:21:31 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:04:44.670 02:21:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2186285 00:04:44.670 02:21:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2186285 00:04:44.670 02:21:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.928 lslocks: write error 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2186285 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 2186285 ']' 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 2186285 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2186285 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2186285' 00:04:44.928 killing process with pid 2186285 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 2186285 00:04:44.928 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 2186285 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2186285 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2186285 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 2186285 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2186285 ']' 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.186 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:45.187 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.187 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:45.187 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2186285) - No such process 00:04:45.445 ERROR: process (pid: 2186285) is no longer running 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:45.445 00:04:45.445 real 0m1.221s 00:04:45.445 user 0m1.137s 00:04:45.445 sys 0m0.541s 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.445 02:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.445 ************************************ 00:04:45.445 END TEST default_locks 00:04:45.445 ************************************ 00:04:45.445 02:21:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:45.445 02:21:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.445 02:21:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.445 02:21:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.445 ************************************ 00:04:45.445 START TEST default_locks_via_rpc 00:04:45.445 ************************************ 00:04:45.445 02:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:04:45.445 02:21:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2186452 00:04:45.445 02:21:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.445 02:21:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2186452 00:04:45.445 02:21:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2186452 ']' 00:04:45.445 02:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.445 02:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:45.445 02:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.445 02:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:45.445 02:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.445 [2024-05-15 02:21:32.710804] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:45.445 [2024-05-15 02:21:32.710896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186452 ] 00:04:45.445 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.445 [2024-05-15 02:21:32.784418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.704 [2024-05-15 02:21:32.901860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.272 02:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.561 02:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.561 02:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2186452 00:04:46.561 02:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2186452 00:04:46.561 02:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2186452 00:04:46.819 02:21:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 2186452 ']' 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 2186452 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2186452 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2186452' 00:04:46.819 killing process with pid 2186452 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 2186452 00:04:46.819 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 2186452 00:04:47.386 00:04:47.386 real 0m1.845s 00:04:47.386 user 0m1.985s 00:04:47.386 sys 0m0.598s 00:04:47.386 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.386 02:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.386 ************************************ 00:04:47.386 END TEST default_locks_via_rpc 00:04:47.386 ************************************ 00:04:47.386 02:21:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:47.386 02:21:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.386 02:21:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.386 02:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.386 ************************************ 00:04:47.386 START TEST non_locking_app_on_locked_coremask 00:04:47.386 ************************************ 00:04:47.386 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:04:47.386 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2186747 00:04:47.386 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.386 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2186747 /var/tmp/spdk.sock 00:04:47.386 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2186747 ']' 00:04:47.386 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.386 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:47.386 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:47.387 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:47.387 02:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.387 [2024-05-15 02:21:34.601981] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:47.387 [2024-05-15 02:21:34.602061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186747 ] 00:04:47.387 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.387 [2024-05-15 02:21:34.675159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.387 [2024-05-15 02:21:34.795417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2186753 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2186753 /var/tmp/spdk2.sock 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2186753 ']' 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.953 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:47.954 02:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.954 [2024-05-15 02:21:35.111208] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:47.954 [2024-05-15 02:21:35.111334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186753 ] 00:04:47.954 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.954 [2024-05-15 02:21:35.212093] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.954 [2024-05-15 02:21:35.212135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.212 [2024-05-15 02:21:35.446193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.778 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:48.778 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:48.778 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2186747 00:04:48.778 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2186747 00:04:48.778 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.344 lslocks: write error 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2186747 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2186747 ']' 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2186747 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2186747 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2186747' 00:04:49.344 killing process with pid 2186747 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2186747 00:04:49.344 02:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2186747 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2186753 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2186753 ']' 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2186753 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2186753 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2186753' 00:04:50.279 
killing process with pid 2186753 00:04:50.279 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2186753 00:04:50.280 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2186753 00:04:50.539 00:04:50.539 real 0m3.393s 00:04:50.539 user 0m3.553s 00:04:50.539 sys 0m1.080s 00:04:50.539 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.539 02:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.539 ************************************ 00:04:50.539 END TEST non_locking_app_on_locked_coremask 00:04:50.539 ************************************ 00:04:50.798 02:21:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:50.798 02:21:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.798 02:21:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.798 02:21:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.798 ************************************ 00:04:50.798 START TEST locking_app_on_unlocked_coremask 00:04:50.798 ************************************ 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2187182 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2187182 /var/tmp/spdk.sock 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2187182 ']' 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:50.798 02:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.798 [2024-05-15 02:21:38.045369] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:50.798 [2024-05-15 02:21:38.045449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187182 ] 00:04:50.798 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.798 [2024-05-15 02:21:38.112656] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:50.798 [2024-05-15 02:21:38.112700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.058 [2024-05-15 02:21:38.223794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2187195 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2187195 /var/tmp/spdk2.sock 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2187195 ']' 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:51.318 02:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.318 [2024-05-15 02:21:38.536557] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:04:51.318 [2024-05-15 02:21:38.536631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187195 ] 00:04:51.318 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.318 [2024-05-15 02:21:38.644749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.577 [2024-05-15 02:21:38.883549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.141 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:52.141 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:52.141 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2187195 00:04:52.141 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2187195 00:04:52.141 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.705 lslocks: write error 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2187182 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2187182 ']' 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2187182 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2187182 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2187182' 00:04:52.705 killing process with pid 2187182 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2187182 00:04:52.705 02:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2187182 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2187195 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2187195 ']' 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2187195 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2187195 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2187195' 00:04:53.640 killing process with pid 2187195 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2187195 00:04:53.640 02:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2187195 00:04:53.898 00:04:53.898 real 0m3.261s 00:04:53.898 user 0m3.339s 00:04:53.898 sys 0m1.069s 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.898 ************************************ 00:04:53.898 END TEST locking_app_on_unlocked_coremask 00:04:53.898 ************************************ 00:04:53.898 02:21:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:53.898 02:21:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.898 02:21:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.898 02:21:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.898 ************************************ 00:04:53.898 START TEST locking_app_on_locked_coremask 00:04:53.898 ************************************ 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2187615 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2187615 /var/tmp/spdk.sock 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2187615 ']' 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:53.898 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.156 [2024-05-15 02:21:41.358216] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:04:54.156 [2024-05-15 02:21:41.358307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187615 ] 00:04:54.156 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.156 [2024-05-15 02:21:41.424808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.156 [2024-05-15 02:21:41.536172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2187627 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2187627 /var/tmp/spdk2.sock 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2187627 /var/tmp/spdk2.sock 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2187627 /var/tmp/spdk2.sock 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2187627 ']' 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:54.414 02:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.672 [2024-05-15 02:21:41.854803] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:04:54.672 [2024-05-15 02:21:41.854888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187627 ] 00:04:54.672 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.672 [2024-05-15 02:21:41.971581] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2187615 has claimed it. 00:04:54.672 [2024-05-15 02:21:41.971639] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:55.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2187627) - No such process 00:04:55.236 ERROR: process (pid: 2187627) is no longer running 00:04:55.236 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:55.236 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:55.236 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:55.236 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:55.236 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:55.236 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:55.236 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2187615 00:04:55.236 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2187615 00:04:55.236 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.801 lslocks: write error 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2187615 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2187615 ']' 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2187615 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2187615 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2187615' 00:04:55.801 killing process with pid 2187615 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2187615 00:04:55.801 02:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2187615 00:04:56.058 00:04:56.058 real 0m2.124s 00:04:56.058 user 0m2.260s 00:04:56.058 sys 0m0.686s 00:04:56.058 02:21:43 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.058 02:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.058 ************************************ 00:04:56.058 END TEST locking_app_on_locked_coremask 00:04:56.058 ************************************ 00:04:56.058 02:21:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:56.058 02:21:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.058 02:21:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.058 02:21:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.315 ************************************ 00:04:56.315 START TEST locking_overlapped_coremask 00:04:56.315 ************************************ 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2187841 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2187841 /var/tmp/spdk.sock 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2187841 ']' 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:56.315 02:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.315 [2024-05-15 02:21:43.544172] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:04:56.315 [2024-05-15 02:21:43.544264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187841 ] 00:04:56.315 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.315 [2024-05-15 02:21:43.613697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.315 [2024-05-15 02:21:43.724440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.315 [2024-05-15 02:21:43.724504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.315 [2024-05-15 02:21:43.724507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2187934 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2187934 /var/tmp/spdk2.sock 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2187934 /var/tmp/spdk2.sock 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2187934 /var/tmp/spdk2.sock 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2187934 ']' 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:57.248 02:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.248 [2024-05-15 02:21:44.522727] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:04:57.248 [2024-05-15 02:21:44.522826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187934 ] 00:04:57.248 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.248 [2024-05-15 02:21:44.629649] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2187841 has claimed it. 00:04:57.248 [2024-05-15 02:21:44.629721] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:57.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2187934) - No such process 00:04:57.813 ERROR: process (pid: 2187934) is no longer running 00:04:57.813 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:57.813 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:57.813 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:57.813 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.813 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2187841 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 2187841 ']' 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 2187841 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2187841 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2187841' 00:04:58.071 killing process with pid 2187841 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
2187841 00:04:58.071 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 2187841 00:04:58.329 00:04:58.329 real 0m2.218s 00:04:58.329 user 0m6.170s 00:04:58.329 sys 0m0.545s 00:04:58.329 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.329 02:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.329 ************************************ 00:04:58.329 END TEST locking_overlapped_coremask 00:04:58.329 ************************************ 00:04:58.329 02:21:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:58.329 02:21:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.330 02:21:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.330 02:21:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.588 ************************************ 00:04:58.588 START TEST locking_overlapped_coremask_via_rpc 00:04:58.588 ************************************ 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2188185 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2188185 /var/tmp/spdk.sock 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2188185 ']' 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:58.588 02:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.588 [2024-05-15 02:21:45.815467] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:58.588 [2024-05-15 02:21:45.815565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188185 ] 00:04:58.588 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.588 [2024-05-15 02:21:45.888190] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:58.588 [2024-05-15 02:21:45.888231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.846 [2024-05-15 02:21:46.008588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.846 [2024-05-15 02:21:46.008655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.846 [2024-05-15 02:21:46.008658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2188240 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2188240 /var/tmp/spdk2.sock 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2188240 ']' 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:59.411 02:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.411 [2024-05-15 02:21:46.788857] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:04:59.411 [2024-05-15 02:21:46.788971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188240 ] 00:04:59.411 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.669 [2024-05-15 02:21:46.900995] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:59.669 [2024-05-15 02:21:46.901033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.928 [2024-05-15 02:21:47.119912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.928 [2024-05-15 02:21:47.122986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:59.928 [2024-05-15 02:21:47.122989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.527 [2024-05-15 02:21:47.764029] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2188185 has claimed it. 
00:05:00.527 request: 00:05:00.527 { 00:05:00.527 "method": "framework_enable_cpumask_locks", 00:05:00.527 "req_id": 1 00:05:00.527 } 00:05:00.527 Got JSON-RPC error response 00:05:00.527 response: 00:05:00.527 { 00:05:00.527 "code": -32603, 00:05:00.527 "message": "Failed to claim CPU core: 2" 00:05:00.527 } 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2188185 /var/tmp/spdk.sock 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2188185 ']' 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:00.527 02:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.785 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:00.785 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:00.785 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2188240 /var/tmp/spdk2.sock 00:05:00.785 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2188240 ']' 00:05:00.785 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.785 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:00.785 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
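The -32603 response above is the expected result of the overlap check: the first spdk_tgt (pid 2188185, whose reactors came up on cores 0-2 earlier) already holds the per-core lock files, so the second target, started with -m 0x1c and --disable-cpumask-locks, cannot claim core 2 once locking is re-enabled over its RPC socket. A minimal bash sketch of that sequence, with paths relative to the spdk checkout, the stock scripts/rpc.py helper assumed, and the first target's 0x7 mask inferred from the cores shown above (the test itself drives this through rpc_cmd in event/cpu_locks.sh):

# first target holds the per-core locks, e.g. /var/tmp/spdk_cpu_lock_000 .. _002
./build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock &
# second target overlaps on core 2 but starts with locking disabled
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
# re-enabling locks on the second target now fails with "Failed to claim CPU core: 2"
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks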
00:05:00.785 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:00.785 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.043 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:01.043 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:01.043 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:01.043 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.043 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.043 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.043 00:05:01.043 real 0m2.521s 00:05:01.043 user 0m1.239s 00:05:01.043 sys 0m0.208s 00:05:01.043 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.043 02:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.043 ************************************ 00:05:01.043 END TEST locking_overlapped_coremask_via_rpc 00:05:01.043 ************************************ 00:05:01.043 02:21:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:01.043 02:21:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2188185 ]] 00:05:01.043 02:21:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2188185 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2188185 ']' 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2188185 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2188185 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2188185' 00:05:01.043 killing process with pid 2188185 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2188185 00:05:01.043 02:21:48 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2188185 00:05:01.609 02:21:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2188240 ]] 00:05:01.609 02:21:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2188240 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2188240 ']' 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2188240 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2188240 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2188240' 00:05:01.609 killing process with pid 2188240 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2188240 00:05:01.609 02:21:48 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2188240 00:05:01.867 02:21:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:01.867 02:21:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:01.867 02:21:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2188185 ]] 00:05:01.867 02:21:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2188185 00:05:01.867 02:21:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2188185 ']' 00:05:01.867 02:21:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2188185 00:05:01.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2188185) - No such process 00:05:01.867 02:21:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2188185 is not found' 00:05:01.867 Process with pid 2188185 is not found 00:05:01.867 02:21:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2188240 ]] 00:05:01.867 02:21:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2188240 00:05:01.867 02:21:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2188240 ']' 00:05:01.867 02:21:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2188240 00:05:01.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2188240) - No such process 00:05:01.867 02:21:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2188240 is not found' 00:05:01.867 Process with pid 2188240 is not found 00:05:01.867 02:21:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.126 00:05:02.126 real 0m17.984s 00:05:02.126 user 0m32.272s 00:05:02.126 sys 0m5.661s 00:05:02.126 02:21:49 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.126 02:21:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.126 ************************************ 00:05:02.126 END TEST cpu_locks 00:05:02.126 ************************************ 00:05:02.126 00:05:02.126 real 0m43.906s 00:05:02.126 user 1m23.444s 00:05:02.126 sys 0m9.756s 00:05:02.126 02:21:49 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.126 02:21:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.126 ************************************ 00:05:02.126 END TEST event 00:05:02.126 ************************************ 00:05:02.126 02:21:49 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.126 02:21:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.126 02:21:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.126 02:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:02.126 ************************************ 00:05:02.126 START TEST thread 00:05:02.126 ************************************ 00:05:02.126 02:21:49 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.126 * Looking for test storage... 00:05:02.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:02.126 02:21:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.126 02:21:49 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:02.126 02:21:49 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.126 02:21:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.126 ************************************ 00:05:02.126 START TEST thread_poller_perf 00:05:02.126 ************************************ 00:05:02.126 02:21:49 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.126 [2024-05-15 02:21:49.447629] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:02.126 [2024-05-15 02:21:49.447681] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188725 ] 00:05:02.126 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.126 [2024-05-15 02:21:49.518808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.384 [2024-05-15 02:21:49.635870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.384 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:03.759 ====================================== 00:05:03.759 busy:2714722044 (cyc) 00:05:03.759 total_run_count: 292000 00:05:03.759 tsc_hz: 2700000000 (cyc) 00:05:03.759 ====================================== 00:05:03.759 poller_cost: 9296 (cyc), 3442 (nsec) 00:05:03.759 00:05:03.759 real 0m1.334s 00:05:03.759 user 0m1.243s 00:05:03.759 sys 0m0.084s 00:05:03.759 02:21:50 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.759 02:21:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.759 ************************************ 00:05:03.759 END TEST thread_poller_perf 00:05:03.759 ************************************ 00:05:03.759 02:21:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.759 02:21:50 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:03.759 02:21:50 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.759 02:21:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.759 ************************************ 00:05:03.759 START TEST thread_poller_perf 00:05:03.759 ************************************ 00:05:03.759 02:21:50 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.759 [2024-05-15 02:21:50.843570] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
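The poller_perf summary above follows from the three figures it prints: the per-poller cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts that through tsc_hz. A short bash sketch that reproduces the first run's numbers (the tool computes this internally in C; the shell arithmetic here is only illustrative):

busy_cyc=2714722044; runs=292000; tsc_hz=2700000000
cost_cyc=$(( busy_cyc / runs ))                   # 9296 cyc, as reported
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 3442 nsec, as reported
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same arithmetic yields 702 cyc and 260 nsec for the zero-period run that follows.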
00:05:03.759 [2024-05-15 02:21:50.843636] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188885 ] 00:05:03.759 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.759 [2024-05-15 02:21:50.919206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.759 [2024-05-15 02:21:51.035035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.759 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:05.133 ====================================== 00:05:05.133 busy:2702543312 (cyc) 00:05:05.133 total_run_count: 3849000 00:05:05.133 tsc_hz: 2700000000 (cyc) 00:05:05.133 ====================================== 00:05:05.133 poller_cost: 702 (cyc), 260 (nsec) 00:05:05.133 00:05:05.133 real 0m1.331s 00:05:05.133 user 0m1.227s 00:05:05.133 sys 0m0.098s 00:05:05.133 02:21:52 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.133 02:21:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.133 ************************************ 00:05:05.133 END TEST thread_poller_perf 00:05:05.133 ************************************ 00:05:05.133 02:21:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:05.133 00:05:05.133 real 0m2.830s 00:05:05.133 user 0m2.526s 00:05:05.133 sys 0m0.299s 00:05:05.133 02:21:52 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.133 02:21:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.133 ************************************ 00:05:05.133 END TEST thread 00:05:05.133 ************************************ 00:05:05.133 02:21:52 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:05.133 02:21:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.133 02:21:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.133 02:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:05.133 ************************************ 00:05:05.133 START TEST accel 00:05:05.133 ************************************ 00:05:05.133 02:21:52 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:05.133 * Looking for test storage... 
00:05:05.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:05.133 02:21:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:05.133 02:21:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:05.133 02:21:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:05.133 02:21:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2189080 00:05:05.133 02:21:52 accel -- accel/accel.sh@63 -- # waitforlisten 2189080 00:05:05.133 02:21:52 accel -- common/autotest_common.sh@827 -- # '[' -z 2189080 ']' 00:05:05.133 02:21:52 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:05.133 02:21:52 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.133 02:21:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:05.133 02:21:52 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:05.133 02:21:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.133 02:21:52 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.133 02:21:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.133 02:21:52 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:05.133 02:21:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.133 02:21:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.133 02:21:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.133 02:21:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.133 02:21:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:05.133 02:21:52 accel -- accel/accel.sh@41 -- # jq -r . 00:05:05.133 [2024-05-15 02:21:52.340855] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:05.133 [2024-05-15 02:21:52.340963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189080 ] 00:05:05.133 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.133 [2024-05-15 02:21:52.409754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.133 [2024-05-15 02:21:52.520159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@860 -- # return 0 00:05:06.071 02:21:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:06.071 02:21:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:06.071 02:21:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:06.071 02:21:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:06.071 02:21:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:06.071 02:21:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.071 02:21:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.071 02:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.071 02:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.071 02:21:53 accel -- accel/accel.sh@75 -- # killprocess 2189080 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@946 -- # '[' -z 2189080 ']' 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@950 -- # kill -0 2189080 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@951 -- # uname 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2189080 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2189080' 00:05:06.071 killing process with pid 2189080 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@965 -- # kill 2189080 00:05:06.071 02:21:53 accel -- common/autotest_common.sh@970 -- # wait 2189080 00:05:06.639 02:21:53 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:06.639 02:21:53 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:06.639 02:21:53 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:06.639 02:21:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.639 02:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.639 02:21:53 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:06.639 02:21:53 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:06.639 02:21:53 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:06.639 02:21:53 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.639 02:21:53 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.639 02:21:53 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.639 02:21:53 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.639 02:21:53 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.639 02:21:53 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:06.639 02:21:53 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:06.639 02:21:53 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.639 02:21:53 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:06.639 02:21:53 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:06.639 02:21:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:06.639 02:21:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.639 02:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.639 ************************************ 00:05:06.639 START TEST accel_missing_filename 00:05:06.639 ************************************ 00:05:06.639 02:21:53 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:06.639 02:21:53 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:06.639 02:21:53 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:06.639 02:21:53 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:06.639 02:21:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.639 02:21:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:06.639 02:21:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.639 02:21:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:06.639 02:21:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:06.639 02:21:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:06.639 02:21:53 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.639 02:21:53 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.639 02:21:53 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.639 02:21:53 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.639 02:21:53 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.639 02:21:53 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:06.639 02:21:53 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:06.639 [2024-05-15 02:21:53.916439] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:06.639 [2024-05-15 02:21:53.916502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189379 ] 00:05:06.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.639 [2024-05-15 02:21:53.991882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.897 [2024-05-15 02:21:54.111279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.897 [2024-05-15 02:21:54.173256] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.897 [2024-05-15 02:21:54.258085] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:07.156 A filename is required. 
00:05:07.156 02:21:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:07.156 02:21:54 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.156 02:21:54 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:07.156 02:21:54 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:07.156 02:21:54 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:07.156 02:21:54 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.156 00:05:07.156 real 0m0.484s 00:05:07.156 user 0m0.353s 00:05:07.156 sys 0m0.163s 00:05:07.156 02:21:54 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.156 02:21:54 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:07.156 ************************************ 00:05:07.156 END TEST accel_missing_filename 00:05:07.156 ************************************ 00:05:07.156 02:21:54 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.156 02:21:54 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:07.156 02:21:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.156 02:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.156 ************************************ 00:05:07.156 START TEST accel_compress_verify 00:05:07.156 ************************************ 00:05:07.156 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.156 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:07.156 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.156 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:07.156 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.156 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:07.156 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.156 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.156 02:21:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.156 02:21:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:07.156 02:21:54 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.156 02:21:54 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.156 02:21:54 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.156 02:21:54 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.156 02:21:54 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.156 
02:21:54 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:07.156 02:21:54 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:07.156 [2024-05-15 02:21:54.453993] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:07.156 [2024-05-15 02:21:54.454049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189401 ] 00:05:07.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.156 [2024-05-15 02:21:54.526728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.413 [2024-05-15 02:21:54.646052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.413 [2024-05-15 02:21:54.707253] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.414 [2024-05-15 02:21:54.785832] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:07.672 00:05:07.672 Compression does not support the verify option, aborting. 00:05:07.672 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:07.672 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.672 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:07.672 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:07.672 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:07.672 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.672 00:05:07.672 real 0m0.473s 00:05:07.672 user 0m0.356s 00:05:07.672 sys 0m0.150s 00:05:07.672 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.672 02:21:54 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:07.672 ************************************ 00:05:07.672 END TEST accel_compress_verify 00:05:07.672 ************************************ 00:05:07.672 02:21:54 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:07.672 02:21:54 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:07.672 02:21:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.672 02:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.672 ************************************ 00:05:07.672 START TEST accel_wrong_workload 00:05:07.672 ************************************ 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:07.672 02:21:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:07.672 02:21:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:07.672 02:21:54 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.672 02:21:54 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.672 02:21:54 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.672 02:21:54 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.672 02:21:54 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.672 02:21:54 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:07.672 02:21:54 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:07.672 Unsupported workload type: foobar 00:05:07.672 [2024-05-15 02:21:54.975141] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:07.672 accel_perf options: 00:05:07.672 [-h help message] 00:05:07.672 [-q queue depth per core] 00:05:07.672 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.672 [-T number of threads per core 00:05:07.672 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:07.672 [-t time in seconds] 00:05:07.672 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.672 [ dif_verify, , dif_generate, dif_generate_copy 00:05:07.672 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.672 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.672 [-S for crc32c workload, use this seed value (default 0) 00:05:07.672 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.672 [-f for fill workload, use this BYTE value (default 255) 00:05:07.672 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.672 [-y verify result if this switch is on] 00:05:07.672 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.672 Can be used to spread operations across a wider range of memory. 
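The usage text above documents every accel_perf switch the remaining tests rely on (-t, -w, -S, -y, -l, -x). A hedged standalone invocation assembled only from options listed in that help; the -q and -o values are illustrative assumptions, while the crc32c test further below uses exactly "-t 1 -w crc32c -S 32 -y":

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -y -q 64 -o 4096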
00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.672 00:05:07.672 real 0m0.022s 00:05:07.672 user 0m0.014s 00:05:07.672 sys 0m0.008s 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.672 02:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:07.672 ************************************ 00:05:07.672 END TEST accel_wrong_workload 00:05:07.672 ************************************ 00:05:07.672 Error: writing output failed: Broken pipe 00:05:07.672 02:21:54 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.672 02:21:54 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:07.672 02:21:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.672 02:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.672 ************************************ 00:05:07.672 START TEST accel_negative_buffers 00:05:07.672 ************************************ 00:05:07.672 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.672 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:07.672 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:07.672 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:07.672 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.672 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:07.672 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.672 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:07.672 02:21:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:07.672 02:21:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:07.672 02:21:55 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.672 02:21:55 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.672 02:21:55 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.672 02:21:55 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.672 02:21:55 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.672 02:21:55 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:07.672 02:21:55 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:07.672 -x option must be non-negative. 
00:05:07.672 [2024-05-15 02:21:55.043126] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:07.672 accel_perf options: 00:05:07.672 [-h help message] 00:05:07.672 [-q queue depth per core] 00:05:07.672 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.672 [-T number of threads per core 00:05:07.672 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:07.672 [-t time in seconds] 00:05:07.672 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.672 [ dif_verify, , dif_generate, dif_generate_copy 00:05:07.672 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.672 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.672 [-S for crc32c workload, use this seed value (default 0) 00:05:07.673 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.673 [-f for fill workload, use this BYTE value (default 255) 00:05:07.673 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.673 [-y verify result if this switch is on] 00:05:07.673 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.673 Can be used to spread operations across a wider range of memory. 00:05:07.673 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:07.673 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.673 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:07.673 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.673 00:05:07.673 real 0m0.021s 00:05:07.673 user 0m0.010s 00:05:07.673 sys 0m0.011s 00:05:07.673 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.673 02:21:55 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:07.673 ************************************ 00:05:07.673 END TEST accel_negative_buffers 00:05:07.673 ************************************ 00:05:07.673 02:21:55 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:07.673 Error: writing output failed: Broken pipe 00:05:07.673 02:21:55 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:07.673 02:21:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.673 02:21:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.930 ************************************ 00:05:07.930 START TEST accel_crc32c 00:05:07.930 ************************************ 00:05:07.930 02:21:55 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:07.930 02:21:55 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:07.930 02:21:55 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:07.931 02:21:55 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:07.931 [2024-05-15 02:21:55.109099] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:07.931 [2024-05-15 02:21:55.109161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189591 ] 00:05:07.931 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.931 [2024-05-15 02:21:55.183604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.931 [2024-05-15 02:21:55.304421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.195 02:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:09.564 02:21:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.564 00:05:09.564 real 0m1.493s 00:05:09.564 user 0m1.341s 00:05:09.564 sys 0m0.155s 00:05:09.564 02:21:56 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.564 02:21:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:09.564 ************************************ 00:05:09.564 END TEST accel_crc32c 00:05:09.564 ************************************ 00:05:09.564 02:21:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:09.564 02:21:56 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:09.564 02:21:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.564 02:21:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.564 ************************************ 00:05:09.564 START TEST accel_crc32c_C2 00:05:09.564 ************************************ 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:09.564 [2024-05-15 02:21:56.655068] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:09.564 [2024-05-15 02:21:56.655130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189744 ] 00:05:09.564 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.564 [2024-05-15 02:21:56.732179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.564 [2024-05-15 02:21:56.849873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.564 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.565 02:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.935 02:21:58 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.935 00:05:10.935 real 0m1.485s 00:05:10.935 user 0m1.326s 00:05:10.935 sys 0m0.160s 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.935 02:21:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:10.935 ************************************ 00:05:10.935 END TEST accel_crc32c_C2 00:05:10.935 ************************************ 00:05:10.935 02:21:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:10.935 02:21:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:10.935 02:21:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.935 02:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.935 ************************************ 00:05:10.935 START TEST accel_copy 00:05:10.935 ************************************ 00:05:10.935 02:21:58 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:10.935 02:21:58 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:10.935 02:21:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:10.935 [2024-05-15 02:21:58.189075] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:10.935 [2024-05-15 02:21:58.189133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189941 ] 00:05:10.935 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.935 [2024-05-15 02:21:58.261577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.193 [2024-05-15 02:21:58.382331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.193 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.194 02:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
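The accel_copy case being driven above runs the same accel_perf example binary with '-t 1 -w copy -y'; the xtrace echoes the buffer size as '4096 bytes' and the duration as '1 seconds'. A minimal standalone reproduction sketch, assuming the SPDK tree is already built at this job's workspace path and that accel_perf falls back to its built-in software module when the '-c /dev/fd/62' config used by the harness is omitted; the SPDK variable is shorthand introduced here, not part of the log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 1-second copy workload on 4096-byte buffers, flags taken verbatim from the log above
"$SPDK/build/examples/accel_perf" -t 1 -w copy -y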
00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:12.569 02:21:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.569 00:05:12.569 real 0m1.482s 00:05:12.569 user 0m1.323s 00:05:12.569 sys 0m0.160s 00:05:12.569 02:21:59 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.569 02:21:59 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:12.569 ************************************ 00:05:12.569 END TEST accel_copy 00:05:12.569 ************************************ 00:05:12.569 02:21:59 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.569 02:21:59 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:12.569 02:21:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.569 02:21:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.569 ************************************ 00:05:12.569 START TEST accel_fill 00:05:12.569 ************************************ 00:05:12.569 02:21:59 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.569 02:21:59 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:12.569 [2024-05-15 02:21:59.721440] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:12.569 [2024-05-15 02:21:59.721506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190184 ] 00:05:12.569 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.569 [2024-05-15 02:21:59.794320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.569 [2024-05-15 02:21:59.913109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.569 02:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:13.942 02:22:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.942 00:05:13.942 real 0m1.483s 00:05:13.942 user 0m1.334s 00:05:13.942 sys 0m0.151s 00:05:13.942 02:22:01 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.942 02:22:01 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:13.942 ************************************ 00:05:13.942 END TEST accel_fill 00:05:13.942 ************************************ 00:05:13.942 02:22:01 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:13.942 02:22:01 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:13.942 02:22:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.942 02:22:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.942 ************************************ 00:05:13.942 START TEST accel_copy_crc32c 00:05:13.942 ************************************ 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:13.942 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
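The copy_crc32c case set up above chains a buffer copy with a CRC-32C calculation in a single operation; the harness again feeds accel_perf a generated config over /dev/fd/62. A standalone sketch under the same assumptions as the earlier one (built tree at the workspace path, '-c' config omitted, software module by default):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# combined copy + CRC-32C workload with verification, flags as recorded above
"$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y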
00:05:13.942 [2024-05-15 02:22:01.253886] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:13.942 [2024-05-15 02:22:01.253964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190337 ] 00:05:13.942 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.942 [2024-05-15 02:22:01.325875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.201 [2024-05-15 02:22:01.445173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.201 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.201 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.201 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.201 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.201 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.201 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.202 02:22:01 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.202 02:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.618 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.619 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.619 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.619 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:15.619 02:22:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.619 00:05:15.619 real 0m1.476s 00:05:15.619 user 0m1.332s 00:05:15.619 sys 0m0.146s 00:05:15.619 02:22:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.619 02:22:02 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:15.619 ************************************ 00:05:15.619 END TEST accel_copy_crc32c 00:05:15.619 ************************************ 00:05:15.619 02:22:02 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:15.619 02:22:02 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:15.619 02:22:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.619 02:22:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.619 ************************************ 00:05:15.619 START TEST accel_copy_crc32c_C2 00:05:15.619 ************************************ 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:15.619 02:22:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:15.619 [2024-05-15 02:22:02.781797] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:15.619 [2024-05-15 02:22:02.781862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190567 ] 00:05:15.619 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.619 [2024-05-15 02:22:02.859569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.619 [2024-05-15 02:22:02.978285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.878 02:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.251 00:05:17.251 real 0m1.483s 00:05:17.251 user 0m1.331s 00:05:17.251 sys 0m0.154s 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.251 02:22:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:17.251 
************************************ 00:05:17.251 END TEST accel_copy_crc32c_C2 00:05:17.251 ************************************ 00:05:17.251 02:22:04 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:17.251 02:22:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:17.251 02:22:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.251 02:22:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.251 ************************************ 00:05:17.251 START TEST accel_dualcast 00:05:17.251 ************************************ 00:05:17.251 02:22:04 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:17.251 [2024-05-15 02:22:04.315441] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
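The dualcast case just launched writes one source buffer to two destination buffers. The real/user/sys summaries printed after each test above look like bash 'time' output, so a comparable standalone timing (same path and config assumptions as the earlier sketches) would be:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# time a 1-second dualcast run, flags taken from the accel_perf command line above
time "$SPDK/build/examples/accel_perf" -t 1 -w dualcast -y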
00:05:17.251 [2024-05-15 02:22:04.315505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190775 ] 00:05:17.251 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.251 [2024-05-15 02:22:04.394372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.251 [2024-05-15 02:22:04.513646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 
02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.251 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.252 02:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.624 02:22:05 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:18.624 02:22:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.624 00:05:18.624 real 0m1.489s 00:05:18.624 user 0m1.330s 00:05:18.624 sys 0m0.161s 00:05:18.624 02:22:05 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.624 02:22:05 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:18.624 ************************************ 00:05:18.624 END TEST accel_dualcast 00:05:18.624 ************************************ 00:05:18.624 02:22:05 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:18.624 02:22:05 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:18.624 02:22:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.624 02:22:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.624 ************************************ 00:05:18.624 START TEST accel_compare 00:05:18.624 ************************************ 00:05:18.624 02:22:05 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:18.624 02:22:05 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:18.624 [2024-05-15 02:22:05.857302] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
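For context on the accel_dualcast case that just finished above (real 0m1.489s on the software module): a dualcast operation copies one source buffer into two destination buffers in a single submission. The C sketch below only illustrates that software-path behaviour and is not SPDK's implementation; the sw_dualcast name is invented for the example and the 4096-byte size is taken from the '4096 bytes' value in the trace.

#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Illustrative software "dualcast": one source copied to two destinations.
 * Mirrors the semantics exercised by the dualcast workload above, not SPDK code. */
static void sw_dualcast(void *dst1, void *dst2, const void *src, size_t len)
{
    memcpy(dst1, src, len);
    memcpy(dst2, src, len);
}

int main(void)
{
    size_t len = 4096;                 /* matches the '4096 bytes' test value */
    unsigned char *src  = malloc(len);
    unsigned char *dst1 = malloc(len);
    unsigned char *dst2 = malloc(len);

    memset(src, 0xA5, len);
    sw_dualcast(dst1, dst2, src, len);

    /* The -y flag in the accel_perf invocation requests result verification;
     * do the equivalent check here. */
    assert(memcmp(dst1, src, len) == 0);
    assert(memcmp(dst2, src, len) == 0);

    free(src); free(dst1); free(dst2);
    return 0;
}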
00:05:18.624 [2024-05-15 02:22:05.857365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190935 ] 00:05:18.624 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.624 [2024-05-15 02:22:05.930540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.882 [2024-05-15 02:22:06.048553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.882 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.883 02:22:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:20.255 02:22:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.255 00:05:20.255 real 0m1.482s 00:05:20.255 user 0m1.331s 00:05:20.255 sys 0m0.153s 00:05:20.255 02:22:07 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.255 02:22:07 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:20.255 ************************************ 00:05:20.255 END TEST accel_compare 00:05:20.255 ************************************ 00:05:20.255 02:22:07 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:20.255 02:22:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:20.255 02:22:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.255 02:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.255 ************************************ 00:05:20.255 START TEST accel_xor 00:05:20.255 ************************************ 00:05:20.255 02:22:07 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:20.255 [2024-05-15 02:22:07.390730] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
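The accel_compare case that completed above (real 0m1.482s) exercises a buffer-comparison operation. Reduced to its software essence it is a memcmp over the test buffer; the sketch below is only an illustration with an invented sw_compare helper, not SPDK source.

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* Illustrative software "compare": returns 0 when the two buffers match,
 * which is what the verify step of the compare workload expects. */
static int sw_compare(const void *a, const void *b, size_t len)
{
    return memcmp(a, b, len);
}

int main(void)
{
    size_t len = 4096;                 /* '4096 bytes' in the trace */
    unsigned char *a = malloc(len), *b = malloc(len);

    memset(a, 0x5A, len);
    memcpy(b, a, len);
    printf("compare result: %d\n", sw_compare(a, b, len));  /* expect 0 */

    free(a); free(b);
    return 0;
}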
00:05:20.255 [2024-05-15 02:22:07.390794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191205 ] 00:05:20.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.255 [2024-05-15 02:22:07.462816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.255 [2024-05-15 02:22:07.585484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.255 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.256 02:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.629 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.630 
02:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.630 00:05:21.630 real 0m1.489s 00:05:21.630 user 0m1.335s 00:05:21.630 sys 0m0.156s 00:05:21.630 02:22:08 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.630 02:22:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:21.630 ************************************ 00:05:21.630 END TEST accel_xor 00:05:21.630 ************************************ 00:05:21.630 02:22:08 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:21.630 02:22:08 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:21.630 02:22:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.630 02:22:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.630 ************************************ 00:05:21.630 START TEST accel_xor 00:05:21.630 ************************************ 00:05:21.630 02:22:08 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:21.630 02:22:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:21.630 [2024-05-15 02:22:08.931358] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
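The first accel_xor run above uses the default of two source buffers (the 'val=2' in its trace). In software terms the operation XORs the sources byte-wise into a destination; the sketch below shows that semantics only, with an invented sw_xor2 helper and the 4096-byte size from the trace.

#include <stdlib.h>
#include <string.h>

/* Illustrative two-source XOR: dst[i] = s0[i] ^ s1[i]. */
static void sw_xor2(unsigned char *dst, const unsigned char *s0,
                    const unsigned char *s1, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] = s0[i] ^ s1[i];
}

int main(void)
{
    size_t len = 4096;
    unsigned char *s0 = malloc(len), *s1 = malloc(len), *dst = malloc(len);

    memset(s0, 0xFF, len);
    memset(s1, 0x0F, len);
    sw_xor2(dst, s0, s1, len);   /* every byte becomes 0xF0 */

    free(s0); free(s1); free(dst);
    return 0;
}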
00:05:21.630 [2024-05-15 02:22:08.931422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191370 ] 00:05:21.630 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.630 [2024-05-15 02:22:09.004643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.888 [2024-05-15 02:22:09.128183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.888 02:22:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.261 
02:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:23.261 02:22:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.261 00:05:23.261 real 0m1.500s 00:05:23.261 user 0m1.349s 00:05:23.261 sys 0m0.153s 00:05:23.261 02:22:10 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.261 02:22:10 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:23.261 ************************************ 00:05:23.261 END TEST accel_xor 00:05:23.261 ************************************ 00:05:23.261 02:22:10 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:23.261 02:22:10 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:23.261 02:22:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.261 02:22:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.261 ************************************ 00:05:23.261 START TEST accel_dif_verify 00:05:23.261 ************************************ 00:05:23.261 02:22:10 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:23.261 02:22:10 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:23.261 [2024-05-15 02:22:10.483083] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
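The second accel_xor run, whose trace ends above, is the -x 3 variant ('val=3'), i.e. three source buffers XORed together. A natural generalization of the previous sketch, again purely illustrative with an invented sw_xor_n helper:

#include <stdlib.h>
#include <string.h>

/* N-source XOR: dst becomes the byte-wise XOR of all sources.
 * With nsrc == 3 this mirrors the -x 3 run above. */
static void sw_xor_n(unsigned char *dst, unsigned char **srcs,
                     unsigned nsrc, size_t len)
{
    memcpy(dst, srcs[0], len);
    for (unsigned s = 1; s < nsrc; s++)
        for (size_t i = 0; i < len; i++)
            dst[i] ^= srcs[s][i];
}

int main(void)
{
    size_t len = 4096;
    unsigned char *bufs[3];
    unsigned char *dst = malloc(len);

    for (int i = 0; i < 3; i++) {
        bufs[i] = malloc(len);
        memset(bufs[i], 0x11 << i, len);   /* 0x11, 0x22, 0x44 */
    }
    sw_xor_n(dst, bufs, 3, len);           /* every byte becomes 0x77 */

    for (int i = 0; i < 3; i++)
        free(bufs[i]);
    free(dst);
    return 0;
}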
00:05:23.261 [2024-05-15 02:22:10.483149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191521 ] 00:05:23.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.261 [2024-05-15 02:22:10.556831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.519 [2024-05-15 02:22:10.677789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.519 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 
02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.520 02:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.891 02:22:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.891 
02:22:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.891 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.891 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:24.892 02:22:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.892 00:05:24.892 real 0m1.487s 00:05:24.892 user 0m1.341s 00:05:24.892 sys 0m0.150s 00:05:24.892 02:22:11 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.892 02:22:11 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:24.892 ************************************ 00:05:24.892 END TEST accel_dif_verify 00:05:24.892 ************************************ 00:05:24.892 02:22:11 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:24.892 02:22:11 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:24.892 02:22:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.892 02:22:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.892 ************************************ 00:05:24.892 START TEST accel_dif_generate 00:05:24.892 ************************************ 00:05:24.892 02:22:12 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
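The accel_dif_verify case above and the accel_dif_generate case starting here both operate on protection information; the '4096 bytes' / '512 bytes' / '8 bytes' values in their traces appear to be the transfer size, the protected block size, and the per-block protection-field size. The sketch below is a simplified illustration of generating and re-checking a T10-DIF-style guard per 512-byte block; the exact field layout, seed handling, and tag checks in SPDK's dif code may differ.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 512          /* '512 bytes' in the trace  */
#define DIF_SIZE   8            /* '8 bytes' in the trace    */
#define XFER_SIZE  4096         /* '4096 bytes' in the trace */

/* CRC-16 with the T10-DIF polynomial 0x8BB7 (bitwise, illustrative). */
static uint16_t crc16_t10dif(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Fill one 8-byte protection field: 2-byte guard, 2-byte app tag, 4-byte ref tag. */
static void dif_generate_block(const uint8_t *block, uint8_t *dif,
                               uint16_t app_tag, uint32_t ref_tag)
{
    uint16_t guard = crc16_t10dif(block, BLOCK_SIZE);
    dif[0] = guard >> 8;          dif[1] = guard & 0xFF;
    dif[2] = app_tag >> 8;        dif[3] = app_tag & 0xFF;
    dif[4] = ref_tag >> 24;       dif[5] = (ref_tag >> 16) & 0xFF;
    dif[6] = (ref_tag >> 8) & 0xFF; dif[7] = ref_tag & 0xFF;
}

/* Recompute the guard and compare it with the stored one. */
static int dif_verify_block(const uint8_t *block, const uint8_t *dif)
{
    uint16_t stored = ((uint16_t)dif[0] << 8) | dif[1];
    return crc16_t10dif(block, BLOCK_SIZE) == stored ? 0 : -1;
}

int main(void)
{
    static uint8_t data[XFER_SIZE];
    static uint8_t md[(XFER_SIZE / BLOCK_SIZE) * DIF_SIZE];

    memset(data, 0x3C, sizeof(data));
    for (size_t i = 0; i < XFER_SIZE / BLOCK_SIZE; i++)
        dif_generate_block(data + i * BLOCK_SIZE, md + i * DIF_SIZE, 0x1234, (uint32_t)i);

    for (size_t i = 0; i < XFER_SIZE / BLOCK_SIZE; i++)
        if (dif_verify_block(data + i * BLOCK_SIZE, md + i * DIF_SIZE) != 0)
            printf("guard mismatch in block %zu\n", i);
    return 0;
}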
00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:24.892 [2024-05-15 02:22:12.019027] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:24.892 [2024-05-15 02:22:12.019091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191801 ] 00:05:24.892 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.892 [2024-05-15 02:22:12.092486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.892 [2024-05-15 02:22:12.216725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.892 02:22:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:26.264 02:22:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.264 00:05:26.264 real 0m1.493s 00:05:26.265 user 0m1.341s 00:05:26.265 sys 
0m0.155s 00:05:26.265 02:22:13 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.265 02:22:13 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:26.265 ************************************ 00:05:26.265 END TEST accel_dif_generate 00:05:26.265 ************************************ 00:05:26.265 02:22:13 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:26.265 02:22:13 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:26.265 02:22:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.265 02:22:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.265 ************************************ 00:05:26.265 START TEST accel_dif_generate_copy 00:05:26.265 ************************************ 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:26.265 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:26.265 [2024-05-15 02:22:13.564778] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
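The accel_dif_generate_copy case starting here combines the two previous ideas: data is copied to a new buffer and a protection field is generated for each block as part of the copy, yielding an interleaved data+DIF layout at the destination. The sketch below is a hedged illustration of that generate-while-copy pattern; the 520-byte output stride is an assumption derived from the 512-byte block and 8-byte metadata sizes in the trace, not something stated in the log.

#include <stdint.h>
#include <string.h>
#include <stdlib.h>

#define BLOCK_SIZE 512
#define DIF_SIZE   8
#define NBLOCKS    8            /* 4096-byte transfer / 512-byte blocks */

/* Same illustrative CRC-16 (poly 0x8BB7) as in the earlier DIF sketch. */
static uint16_t crc16_t10dif(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Copy a flat source into an extended destination, appending one 8-byte
 * protection field after every 512-byte block (generate-while-copy). */
static void sw_dif_generate_copy(uint8_t *dst, const uint8_t *src)
{
    for (size_t i = 0; i < NBLOCKS; i++) {
        uint8_t *out = dst + i * (BLOCK_SIZE + DIF_SIZE);
        memcpy(out, src + i * BLOCK_SIZE, BLOCK_SIZE);
        uint16_t guard = crc16_t10dif(out, BLOCK_SIZE);
        out[BLOCK_SIZE]     = guard >> 8;               /* guard tag */
        out[BLOCK_SIZE + 1] = guard & 0xFF;
        memset(out + BLOCK_SIZE + 2, 0, DIF_SIZE - 2);  /* app/ref tags left zero */
    }
}

int main(void)
{
    uint8_t *src = malloc(NBLOCKS * BLOCK_SIZE);
    uint8_t *dst = malloc(NBLOCKS * (BLOCK_SIZE + DIF_SIZE));

    memset(src, 0x7E, NBLOCKS * BLOCK_SIZE);
    sw_dif_generate_copy(dst, src);

    free(src); free(dst);
    return 0;
}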
00:05:26.265 [2024-05-15 02:22:13.564843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191957 ] 00:05:26.265 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.265 [2024-05-15 02:22:13.638636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.523 [2024-05-15 02:22:13.762409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.523 02:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.895 00:05:27.895 real 0m1.501s 00:05:27.895 user 0m1.346s 00:05:27.895 sys 0m0.157s 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.895 02:22:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:27.895 ************************************ 00:05:27.895 END TEST accel_dif_generate_copy 00:05:27.895 ************************************ 00:05:27.895 02:22:15 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:27.895 02:22:15 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.895 02:22:15 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:27.895 02:22:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.895 02:22:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.895 ************************************ 00:05:27.895 START TEST accel_comp 00:05:27.895 ************************************ 00:05:27.895 02:22:15 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:27.895 02:22:15 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:27.895 [2024-05-15 02:22:15.116047] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:27.895 [2024-05-15 02:22:15.116113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192116 ] 00:05:27.895 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.895 [2024-05-15 02:22:15.189949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.153 [2024-05-15 02:22:15.310897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 
02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.153 02:22:15 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.153 02:22:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:29.527 02:22:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.527 00:05:29.527 real 0m1.494s 00:05:29.527 user 0m1.338s 00:05:29.527 sys 0m0.159s 00:05:29.527 02:22:16 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.527 02:22:16 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:29.527 ************************************ 00:05:29.527 END TEST accel_comp 00:05:29.527 ************************************ 00:05:29.527 02:22:16 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.527 02:22:16 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:29.527 02:22:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.527 02:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.527 ************************************ 00:05:29.527 START TEST accel_decomp 00:05:29.527 ************************************ 00:05:29.527 02:22:16 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:29.527 02:22:16 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:29.527 [2024-05-15 02:22:16.660554] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:29.528 [2024-05-15 02:22:16.660619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192387 ] 00:05:29.528 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.528 [2024-05-15 02:22:16.734123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.528 [2024-05-15 02:22:16.856698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.528 02:22:16 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.528 02:22:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:30.948 02:22:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.949 00:05:30.949 real 0m1.501s 00:05:30.949 user 0m1.348s 00:05:30.949 sys 0m0.156s 00:05:30.949 02:22:18 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.949 02:22:18 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:30.949 ************************************ 00:05:30.949 END TEST accel_decomp 00:05:30.949 ************************************ 00:05:30.949 
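Note (annotation, not part of the captured log): the accel_comp and accel_decomp cases that finish above both run the same example binary against the test/accel/bib input file, switching only -w between compress and decompress and, per the logged command line, adding -y, which appears to be accel_perf's verify switch. Reconstructed from that command line, a manual decompress run would look roughly like the sketch below (illustrative only).

    # hypothetical re-run of the 4096-byte decompress case traced above
    BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib   # input file shipped with the SPDK accel tests
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y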
02:22:18 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.949 02:22:18 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:30.949 02:22:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.949 02:22:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.949 ************************************ 00:05:30.949 START TEST accel_decmop_full 00:05:30.949 ************************************ 00:05:30.949 02:22:18 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:30.949 02:22:18 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:30.949 [2024-05-15 02:22:18.208918] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
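Note (annotation, not part of the captured log): accel_decmop_full repeats the decompress case with -o 0 appended; in the trace that follows, the per-operation size reported changes from the '4096 bytes' of the earlier runs to '111250 bytes', which suggests -o 0 makes accel_perf size the operation to the whole input rather than the 4K default. A manual equivalent, reconstructed from the logged command line under that assumption, is sketched below.

    # hypothetical full-buffer decompress run (-o 0) as exercised by accel_decmop_full
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0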
00:05:30.949 [2024-05-15 02:22:18.209011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192550 ] 00:05:30.949 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.949 [2024-05-15 02:22:18.286919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.207 [2024-05-15 02:22:18.412039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.207 02:22:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:32.580 02:22:19 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.580 00:05:32.580 real 0m1.521s 00:05:32.580 user 0m1.366s 00:05:32.580 sys 0m0.158s 00:05:32.580 02:22:19 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.580 02:22:19 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:32.580 ************************************ 00:05:32.580 END TEST accel_decmop_full 00:05:32.580 ************************************ 00:05:32.581 02:22:19 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.581 02:22:19 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:32.581 02:22:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.581 02:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.581 ************************************ 00:05:32.581 START TEST accel_decomp_mcore 00:05:32.581 ************************************ 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:32.581 02:22:19 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:32.581 [2024-05-15 02:22:19.781360] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:32.581 [2024-05-15 02:22:19.781435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192717 ] 00:05:32.581 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.581 [2024-05-15 02:22:19.857882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.581 [2024-05-15 02:22:19.985065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.581 [2024-05-15 02:22:19.985119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.581 [2024-05-15 02:22:19.985170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.581 [2024-05-15 02:22:19.985173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.839 02:22:20 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.839 02:22:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.219 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.219 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.219 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.219 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.219 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.219 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.220 00:05:34.220 real 0m1.515s 00:05:34.220 user 0m4.822s 00:05:34.220 sys 0m0.166s 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.220 02:22:21 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:34.220 ************************************ 00:05:34.220 END TEST accel_decomp_mcore 00:05:34.220 ************************************ 00:05:34.220 02:22:21 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:34.220 02:22:21 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:34.220 02:22:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.220 02:22:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.220 ************************************ 00:05:34.220 START TEST accel_decomp_full_mcore 00:05:34.220 ************************************ 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:34.220 [2024-05-15 02:22:21.351449] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:34.220 [2024-05-15 02:22:21.351513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192988 ] 00:05:34.220 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.220 [2024-05-15 02:22:21.430586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.220 [2024-05-15 02:22:21.556843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.220 [2024-05-15 02:22:21.556894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.220 [2024-05-15 02:22:21.556958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.220 [2024-05-15 02:22:21.556963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:34.220 02:22:21 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.220 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.221 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.221 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 02:22:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.594 00:05:35.594 real 0m1.530s 00:05:35.594 user 0m4.880s 00:05:35.594 sys 0m0.170s 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.594 02:22:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:35.594 ************************************ 00:05:35.594 END TEST accel_decomp_full_mcore 00:05:35.594 ************************************ 00:05:35.594 02:22:22 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:35.594 02:22:22 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:35.594 02:22:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.594 02:22:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.594 ************************************ 00:05:35.594 START TEST accel_decomp_mthread 00:05:35.594 ************************************ 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:35.594 02:22:22 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:05:35.594 [2024-05-15 02:22:22.934286] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:35.594 [2024-05-15 02:22:22.934350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193146 ] 00:05:35.594 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.594 [2024-05-15 02:22:23.007847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.852 [2024-05-15 02:22:23.130823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.852 02:22:23 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.224 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.225 00:05:37.225 real 0m1.490s 00:05:37.225 user 0m1.332s 00:05:37.225 sys 0m0.160s 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.225 02:22:24 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:37.225 ************************************ 00:05:37.225 END TEST accel_decomp_mthread 00:05:37.225 ************************************ 00:05:37.225 02:22:24 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:37.225 02:22:24 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:37.225 02:22:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.225 02:22:24 
accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.225 ************************************ 00:05:37.225 START TEST accel_decomp_full_mthread 00:05:37.225 ************************************ 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:37.225 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:37.225 [2024-05-15 02:22:24.469291] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
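[annotation] The mthread pair differs from the mcore pair only in how the work is spread: instead of a wider core mask, accel_perf is passed -T 2 and runs on the single core 0 reactor seen in the trace (EAL coremask 0x1). The accel_decomp_mthread run above keeps the default transfer size ('4096 bytes' in its trace); the accel_decomp_full_mthread run starting here adds -o 0, which is why its trace switches to the full '111250 bytes' vector. The two invocations as logged (paths shortened):
  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l ./test/accel/bib -y -T 2        # default 4096-byte buffers
  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l ./test/accel/bib -y -o 0 -T 2   # full 111250-byte vector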
00:05:37.225 [2024-05-15 02:22:24.469349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193565 ] 00:05:37.225 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.225 [2024-05-15 02:22:24.544870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.483 [2024-05-15 02:22:24.668876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.483 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.484 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.484 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.484 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.484 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.484 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:05:37.484 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.484 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.484 02:22:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.856 00:05:38.856 real 0m1.539s 00:05:38.856 user 0m1.382s 00:05:38.856 sys 0m0.159s 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.856 02:22:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:38.856 ************************************ 00:05:38.856 END TEST accel_decomp_full_mthread 00:05:38.856 
************************************ 00:05:38.856 02:22:26 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:38.856 02:22:26 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:38.856 02:22:26 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:38.856 02:22:26 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:38.856 02:22:26 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.856 02:22:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.856 02:22:26 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.856 02:22:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.856 02:22:26 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.856 02:22:26 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.856 02:22:26 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.856 02:22:26 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:38.856 02:22:26 accel -- accel/accel.sh@41 -- # jq -r . 00:05:38.856 ************************************ 00:05:38.856 START TEST accel_dif_functional_tests 00:05:38.856 ************************************ 00:05:38.856 02:22:26 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:38.856 [2024-05-15 02:22:26.083657] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:38.856 [2024-05-15 02:22:26.083720] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193835 ] 00:05:38.856 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.856 [2024-05-15 02:22:26.155352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.115 [2024-05-15 02:22:26.281073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.115 [2024-05-15 02:22:26.281097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.115 [2024-05-15 02:22:26.281101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.115 00:05:39.115 00:05:39.115 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.115 http://cunit.sourceforge.net/ 00:05:39.115 00:05:39.115 00:05:39.115 Suite: accel_dif 00:05:39.115 Test: verify: DIF generated, GUARD check ...passed 00:05:39.115 Test: verify: DIF generated, APPTAG check ...passed 00:05:39.115 Test: verify: DIF generated, REFTAG check ...passed 00:05:39.115 Test: verify: DIF not generated, GUARD check ...[2024-05-15 02:22:26.381529] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:39.115 [2024-05-15 02:22:26.381595] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:39.115 passed 00:05:39.115 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 02:22:26.381638] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:39.115 [2024-05-15 02:22:26.381669] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:39.115 passed 00:05:39.115 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 02:22:26.381705] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:39.115 [2024-05-15 
02:22:26.381735] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:39.115 passed 00:05:39.115 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:39.115 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 02:22:26.381809] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:39.115 passed 00:05:39.115 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:39.115 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:39.115 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:39.115 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 02:22:26.381975] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:39.115 passed 00:05:39.115 Test: generate copy: DIF generated, GUARD check ...passed 00:05:39.115 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:39.115 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:39.115 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:39.115 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:39.115 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:39.115 Test: generate copy: iovecs-len validate ...[2024-05-15 02:22:26.382240] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:39.115 passed 00:05:39.115 Test: generate copy: buffer alignment validate ...passed 00:05:39.115 00:05:39.115 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.115 suites 1 1 n/a 0 0 00:05:39.115 tests 20 20 20 0 0 00:05:39.115 asserts 204 204 204 0 n/a 00:05:39.115 00:05:39.115 Elapsed time = 0.003 seconds 00:05:39.376 00:05:39.376 real 0m0.596s 00:05:39.376 user 0m0.872s 00:05:39.376 sys 0m0.189s 00:05:39.376 02:22:26 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.376 02:22:26 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:39.376 ************************************ 00:05:39.376 END TEST accel_dif_functional_tests 00:05:39.376 ************************************ 00:05:39.376 00:05:39.376 real 0m34.423s 00:05:39.376 user 0m37.697s 00:05:39.376 sys 0m4.958s 00:05:39.376 02:22:26 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.376 02:22:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.376 ************************************ 00:05:39.376 END TEST accel 00:05:39.376 ************************************ 00:05:39.376 02:22:26 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:39.376 02:22:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.376 02:22:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.376 02:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:39.376 ************************************ 00:05:39.376 START TEST accel_rpc 00:05:39.376 ************************************ 00:05:39.376 02:22:26 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:39.376 * Looking for test storage... 
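[annotation] The accel_rpc suite starting here leaves accel_perf behind and drives opcode-to-module assignment over JSON-RPC: the trace below launches spdk_tgt with --wait-for-rpc, assigns the copy opcode first to a module named "incorrect" and then to software, runs framework_start_init, and confirms the result via accel_get_opc_assignments. A minimal manual equivalent, assuming a freshly started target listening on the default /var/tmp/spdk.sock:
  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init, only logged as a NOTICE
  ./scripts/rpc.py accel_assign_opc -o copy -m software    # re-assign to software before init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expected to print "software", as the trace does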
00:05:39.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:39.376 02:22:26 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.376 02:22:26 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2194023 00:05:39.376 02:22:26 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:39.376 02:22:26 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2194023 00:05:39.376 02:22:26 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 2194023 ']' 00:05:39.376 02:22:26 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.376 02:22:26 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.376 02:22:26 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.376 02:22:26 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.376 02:22:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.634 [2024-05-15 02:22:26.804838] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:05:39.634 [2024-05-15 02:22:26.804945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194023 ] 00:05:39.634 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.634 [2024-05-15 02:22:26.870974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.634 [2024-05-15 02:22:26.976549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.568 02:22:27 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.568 02:22:27 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:40.568 02:22:27 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:40.568 02:22:27 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:40.568 02:22:27 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:40.568 02:22:27 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:40.568 02:22:27 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:40.568 02:22:27 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.568 02:22:27 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.568 02:22:27 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.568 ************************************ 00:05:40.568 START TEST accel_assign_opcode 00:05:40.568 ************************************ 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:40.568 [2024-05-15 02:22:27.767053] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:40.568 [2024-05-15 02:22:27.775062] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.568 02:22:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:40.827 02:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.827 02:22:28 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:40.827 02:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.827 02:22:28 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:40.827 02:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:40.827 02:22:28 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:40.827 02:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.827 software 00:05:40.827 00:05:40.827 real 0m0.303s 00:05:40.827 user 0m0.041s 00:05:40.827 sys 0m0.008s 00:05:40.827 02:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.827 02:22:28 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:40.827 ************************************ 00:05:40.827 END TEST accel_assign_opcode 00:05:40.827 ************************************ 00:05:40.827 02:22:28 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2194023 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 2194023 ']' 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 2194023 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2194023 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2194023' 00:05:40.827 killing process with pid 2194023 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@965 -- # kill 2194023 00:05:40.827 02:22:28 accel_rpc -- common/autotest_common.sh@970 -- # wait 2194023 00:05:41.394 00:05:41.394 real 0m1.891s 00:05:41.394 user 0m2.027s 00:05:41.394 sys 0m0.448s 00:05:41.394 02:22:28 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.394 02:22:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.394 ************************************ 00:05:41.394 END TEST accel_rpc 00:05:41.394 ************************************ 00:05:41.394 02:22:28 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:41.394 02:22:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.394 02:22:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.394 02:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:41.394 ************************************ 00:05:41.394 START TEST app_cmdline 00:05:41.394 ************************************ 00:05:41.394 02:22:28 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:41.394 * Looking for test storage... 00:05:41.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:41.394 02:22:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:41.394 02:22:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2194241 00:05:41.394 02:22:28 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:41.394 02:22:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2194241 00:05:41.394 02:22:28 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 2194241 ']' 00:05:41.394 02:22:28 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.394 02:22:28 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.394 02:22:28 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.394 02:22:28 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.394 02:22:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:41.394 [2024-05-15 02:22:28.766367] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
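[annotation] The cmdline suite whose startup is traced here runs spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are exposed; the version JSON and the code -32601 "Method not found" error that appear further down are produced by calls equivalent to:
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version                        # returns the {"version": "SPDK v24.05-pre ...", ...} object shown below
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # must list exactly the two allowed methods
  ./scripts/rpc.py env_dpdk_get_mem_stats                  # not in the allow-list -> JSON-RPC "Method not found"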
00:05:41.394 [2024-05-15 02:22:28.766462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194241 ] 00:05:41.394 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.653 [2024-05-15 02:22:28.846882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.653 [2024-05-15 02:22:28.969372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.911 02:22:29 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.911 02:22:29 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:05:41.911 02:22:29 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:42.169 { 00:05:42.169 "version": "SPDK v24.05-pre git sha1 0ed7af446", 00:05:42.169 "fields": { 00:05:42.169 "major": 24, 00:05:42.169 "minor": 5, 00:05:42.169 "patch": 0, 00:05:42.169 "suffix": "-pre", 00:05:42.169 "commit": "0ed7af446" 00:05:42.169 } 00:05:42.169 } 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:42.169 02:22:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:42.169 02:22:29 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:42.169 02:22:29 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:42.735 request: 00:05:42.735 { 00:05:42.735 "method": "env_dpdk_get_mem_stats", 00:05:42.735 "req_id": 1 00:05:42.735 } 00:05:42.735 Got JSON-RPC error response 00:05:42.735 response: 00:05:42.735 { 00:05:42.735 "code": -32601, 00:05:42.735 "message": "Method not found" 00:05:42.735 } 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.735 02:22:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2194241 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 2194241 ']' 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 2194241 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2194241 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2194241' 00:05:42.735 killing process with pid 2194241 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@965 -- # kill 2194241 00:05:42.735 02:22:29 app_cmdline -- common/autotest_common.sh@970 -- # wait 2194241 00:05:42.995 00:05:42.995 real 0m1.701s 00:05:42.995 user 0m2.110s 00:05:42.995 sys 0m0.500s 00:05:42.995 02:22:30 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.995 02:22:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:42.995 ************************************ 00:05:42.995 END TEST app_cmdline 00:05:42.995 ************************************ 00:05:42.995 02:22:30 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:42.995 02:22:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.995 02:22:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.995 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:43.254 ************************************ 00:05:43.254 START TEST version 00:05:43.254 ************************************ 00:05:43.254 02:22:30 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:43.254 * Looking for test storage... 
00:05:43.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:43.254 02:22:30 version -- app/version.sh@17 -- # get_header_version major 00:05:43.254 02:22:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:43.254 02:22:30 version -- app/version.sh@14 -- # cut -f2 00:05:43.254 02:22:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:43.254 02:22:30 version -- app/version.sh@17 -- # major=24 00:05:43.254 02:22:30 version -- app/version.sh@18 -- # get_header_version minor 00:05:43.254 02:22:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:43.254 02:22:30 version -- app/version.sh@14 -- # cut -f2 00:05:43.254 02:22:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:43.254 02:22:30 version -- app/version.sh@18 -- # minor=5 00:05:43.254 02:22:30 version -- app/version.sh@19 -- # get_header_version patch 00:05:43.254 02:22:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:43.254 02:22:30 version -- app/version.sh@14 -- # cut -f2 00:05:43.254 02:22:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:43.254 02:22:30 version -- app/version.sh@19 -- # patch=0 00:05:43.254 02:22:30 version -- app/version.sh@20 -- # get_header_version suffix 00:05:43.254 02:22:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:43.254 02:22:30 version -- app/version.sh@14 -- # cut -f2 00:05:43.254 02:22:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:43.254 02:22:30 version -- app/version.sh@20 -- # suffix=-pre 00:05:43.254 02:22:30 version -- app/version.sh@22 -- # version=24.5 00:05:43.254 02:22:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:43.254 02:22:30 version -- app/version.sh@28 -- # version=24.5rc0 00:05:43.254 02:22:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:43.254 02:22:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:43.254 02:22:30 version -- app/version.sh@30 -- # py_version=24.5rc0 00:05:43.254 02:22:30 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:43.254 00:05:43.254 real 0m0.107s 00:05:43.254 user 0m0.052s 00:05:43.254 sys 0m0.077s 00:05:43.254 02:22:30 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.254 02:22:30 version -- common/autotest_common.sh@10 -- # set +x 00:05:43.254 ************************************ 00:05:43.254 END TEST version 00:05:43.254 ************************************ 00:05:43.254 02:22:30 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:43.254 02:22:30 -- spdk/autotest.sh@194 -- # uname -s 00:05:43.254 02:22:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:43.254 02:22:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:43.254 02:22:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:43.254 02:22:30 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
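version.sh above never starts the target; it scrapes the version macros straight out of include/spdk/version.h and cross-checks them against the Python bindings (the "-pre" suffix is what the harness reports as 24.5rc0, matching python3 -c 'import spdk; print(spdk.__version__)'). A minimal sketch of the same extraction, run from the SPDK checkout:

    # pull one field out of the version header: MAJOR, MINOR, PATCH or SUFFIX
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 5
    patch=$(get_header_version PATCH)     # 0
    suffix=$(get_header_version SUFFIX)   # -pre
    echo "${major}.${minor}${suffix}"     # 24.5-pre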
00:05:43.254 02:22:30 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:43.254 02:22:30 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:43.254 02:22:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.254 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:43.254 02:22:30 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:43.254 02:22:30 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:43.254 02:22:30 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:05:43.254 02:22:30 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:05:43.254 02:22:30 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:05:43.254 02:22:30 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:05:43.254 02:22:30 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:43.254 02:22:30 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:43.254 02:22:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.254 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:43.254 ************************************ 00:05:43.254 START TEST nvmf_tcp 00:05:43.254 ************************************ 00:05:43.254 02:22:30 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:43.254 * Looking for test storage... 00:05:43.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.254 02:22:30 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.254 02:22:30 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.254 02:22:30 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.254 02:22:30 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.254 02:22:30 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.254 02:22:30 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.254 02:22:30 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.254 02:22:30 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:43.255 02:22:30 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:43.255 02:22:30 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:43.255 02:22:30 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:43.255 02:22:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.514 02:22:30 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:43.514 02:22:30 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:43.514 02:22:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:43.514 02:22:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.514 
02:22:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.514 ************************************ 00:05:43.514 START TEST nvmf_example 00:05:43.514 ************************************ 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:43.514 * Looking for test storage... 00:05:43.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:43.514 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:43.515 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:43.515 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:43.515 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.515 02:22:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:43.515 02:22:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.515 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:43.515 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:43.515 02:22:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:43.515 02:22:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:46.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:46.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:46.070 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:46.071 Found net devices under 
0000:0a:00.0: cvl_0_0 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:46.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:46.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:46.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:05:46.071 00:05:46.071 --- 10.0.0.2 ping statistics --- 00:05:46.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.071 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:46.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:46.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:05:46.071 00:05:46.071 --- 10.0.0.1 ping statistics --- 00:05:46.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.071 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2196614 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2196614 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 2196614 ']' 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
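The block above is nvmf_tcp_init on a physical (NET_TYPE=phy) rig: the first e810 port, cvl_0_0, is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port, cvl_0_1, stays in the default namespace as the initiator at 10.0.0.1; the two pings confirm the back-to-back link works in both directions before any NVMe/TCP traffic is attempted. Stripped of the xtrace noise, the plumbing amounts to:

    ip netns add cvl_0_0_ns_spdk                         # namespace that will host the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow inbound TCP to port 4420 on cvl_0_1
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator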
00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.071 02:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:46.071 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.003 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:47.261 02:22:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:47.261 EAL: No free 2048 kB hugepages reported on node 1 
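With the link verified, the example target (build/examples/nvmf, started with -m 0xF inside the namespace) is configured entirely over JSON-RPC and then exercised from the initiator side with spdk_nvme_perf; the result table that follows comes from that run. Reduced to the calls visible in the trace (rpc_cmd in the xtrace wraps scripts/rpc.py), the sequence is roughly:

    # against the nvmf example app running inside cvl_0_0_ns_spdk
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512          # 64 MiB ramdisk, 512-byte blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # from the default namespace: queue depth 64, 4 KiB random I/O, 30% reads, 10 seconds
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'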
00:05:57.236 Initializing NVMe Controllers 00:05:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:57.236 Initialization complete. Launching workers. 00:05:57.236 ======================================================== 00:05:57.236 Latency(us) 00:05:57.236 Device Information : IOPS MiB/s Average min max 00:05:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14840.50 57.97 4313.15 898.03 15252.00 00:05:57.237 ======================================================== 00:05:57.237 Total : 14840.50 57.97 4313.15 898.03 15252.00 00:05:57.237 00:05:57.237 02:22:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:57.237 02:22:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:57.237 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:57.237 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:57.237 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:57.237 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:57.237 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:57.237 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:57.237 rmmod nvme_tcp 00:05:57.237 rmmod nvme_fabrics 00:05:57.494 rmmod nvme_keyring 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2196614 ']' 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2196614 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 2196614 ']' 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 2196614 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2196614 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2196614' 00:05:57.494 killing process with pid 2196614 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 2196614 00:05:57.494 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 2196614 00:05:57.751 nvmf threads initialize successfully 00:05:57.751 bdev subsystem init successfully 00:05:57.751 created a nvmf target service 00:05:57.751 create targets's poll groups done 00:05:57.751 all subsystems of target started 00:05:57.751 nvmf target is running 00:05:57.751 all subsystems of target stopped 00:05:57.751 destroy targets's poll groups done 00:05:57.751 destroyed the nvmf target service 00:05:57.751 bdev subsystem finish successfully 00:05:57.751 nvmf threads destroy successfully 00:05:57.751 02:22:44 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:57.752 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:57.752 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:57.752 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:57.752 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:57.752 02:22:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.752 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:57.752 02:22:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.653 02:22:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:59.653 02:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:59.653 02:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.653 02:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:59.653 00:05:59.653 real 0m16.322s 00:05:59.653 user 0m44.245s 00:05:59.653 sys 0m3.988s 00:05:59.653 02:22:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.653 02:22:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:59.653 ************************************ 00:05:59.653 END TEST nvmf_example 00:05:59.653 ************************************ 00:05:59.653 02:22:47 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:59.653 02:22:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:59.653 02:22:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.653 02:22:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.914 ************************************ 00:05:59.914 START TEST nvmf_filesystem 00:05:59.914 ************************************ 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:59.914 * Looking for test storage... 
00:05:59.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:59.914 02:22:47 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:59.914 02:22:47 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:59.914 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:59.915 
02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:59.915 #define SPDK_CONFIG_H 00:05:59.915 #define SPDK_CONFIG_APPS 1 00:05:59.915 #define SPDK_CONFIG_ARCH native 00:05:59.915 #undef SPDK_CONFIG_ASAN 00:05:59.915 #undef SPDK_CONFIG_AVAHI 00:05:59.915 #undef SPDK_CONFIG_CET 00:05:59.915 #define SPDK_CONFIG_COVERAGE 1 00:05:59.915 #define SPDK_CONFIG_CROSS_PREFIX 00:05:59.915 #undef SPDK_CONFIG_CRYPTO 00:05:59.915 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:59.915 #undef SPDK_CONFIG_CUSTOMOCF 00:05:59.915 #undef SPDK_CONFIG_DAOS 00:05:59.915 #define SPDK_CONFIG_DAOS_DIR 00:05:59.915 #define SPDK_CONFIG_DEBUG 1 00:05:59.915 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:59.915 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:59.915 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:59.915 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:59.915 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:59.915 #undef SPDK_CONFIG_DPDK_UADK 00:05:59.915 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:59.915 #define SPDK_CONFIG_EXAMPLES 1 00:05:59.915 #undef SPDK_CONFIG_FC 00:05:59.915 #define SPDK_CONFIG_FC_PATH 00:05:59.915 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:59.915 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:59.915 #undef SPDK_CONFIG_FUSE 00:05:59.915 #undef SPDK_CONFIG_FUZZER 00:05:59.915 #define SPDK_CONFIG_FUZZER_LIB 00:05:59.915 #undef SPDK_CONFIG_GOLANG 00:05:59.915 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:59.915 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:59.915 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:59.915 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:59.915 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:59.915 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:59.915 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:59.915 #define SPDK_CONFIG_IDXD 1 00:05:59.915 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:59.915 #undef SPDK_CONFIG_IPSEC_MB 00:05:59.915 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:59.915 #define SPDK_CONFIG_ISAL 1 00:05:59.915 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:59.915 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:59.915 #define SPDK_CONFIG_LIBDIR 00:05:59.915 #undef SPDK_CONFIG_LTO 00:05:59.915 #define SPDK_CONFIG_MAX_LCORES 00:05:59.915 #define SPDK_CONFIG_NVME_CUSE 1 00:05:59.915 #undef SPDK_CONFIG_OCF 00:05:59.915 #define SPDK_CONFIG_OCF_PATH 00:05:59.915 #define SPDK_CONFIG_OPENSSL_PATH 00:05:59.915 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:59.915 #define SPDK_CONFIG_PGO_DIR 00:05:59.915 #undef 
SPDK_CONFIG_PGO_USE 00:05:59.915 #define SPDK_CONFIG_PREFIX /usr/local 00:05:59.915 #undef SPDK_CONFIG_RAID5F 00:05:59.915 #undef SPDK_CONFIG_RBD 00:05:59.915 #define SPDK_CONFIG_RDMA 1 00:05:59.915 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:59.915 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:59.915 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:59.915 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:59.915 #define SPDK_CONFIG_SHARED 1 00:05:59.915 #undef SPDK_CONFIG_SMA 00:05:59.915 #define SPDK_CONFIG_TESTS 1 00:05:59.915 #undef SPDK_CONFIG_TSAN 00:05:59.915 #define SPDK_CONFIG_UBLK 1 00:05:59.915 #define SPDK_CONFIG_UBSAN 1 00:05:59.915 #undef SPDK_CONFIG_UNIT_TESTS 00:05:59.915 #undef SPDK_CONFIG_URING 00:05:59.915 #define SPDK_CONFIG_URING_PATH 00:05:59.915 #undef SPDK_CONFIG_URING_ZNS 00:05:59.915 #undef SPDK_CONFIG_USDT 00:05:59.915 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:59.915 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:59.915 #define SPDK_CONFIG_VFIO_USER 1 00:05:59.915 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:59.915 #define SPDK_CONFIG_VHOST 1 00:05:59.915 #define SPDK_CONFIG_VIRTIO 1 00:05:59.915 #undef SPDK_CONFIG_VTUNE 00:05:59.915 #define SPDK_CONFIG_VTUNE_DIR 00:05:59.915 #define SPDK_CONFIG_WERROR 1 00:05:59.915 #define SPDK_CONFIG_WPDK_DIR 00:05:59.915 #undef SPDK_CONFIG_XNVME 00:05:59.915 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:59.915 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:05:59.916 02:22:47 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:59.916 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
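The stretch of trace above is autotest_common.sh pinning down the per-run test environment: each SPDK_TEST_*/SPDK_RUN_* switch is given a default and exported (the ': 1', ': tcp' and ': e810' entries are the values that survive for SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_NVME_CLI, SPDK_TEST_NVMF, SPDK_TEST_VFIOUSER, SPDK_RUN_UBSAN, SPDK_TEST_NVMF_TRANSPORT and SPDK_TEST_NVMF_NICS), sanitizer behaviour is fixed through ASAN_OPTIONS/UBSAN_OPTIONS/LSAN_OPTIONS, and hugepage handling (HUGEMEM=4096, CLEAR_HUGE=yes) is prepared before DPDK comes up. A minimal sketch of the defaulting pattern the trace implies (the real defaults live in autotest_common.sh; the variable values below are only illustrative):

  # sketch: keep a value already exported earlier in the run, otherwise fall back to a default
  : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
  : "${SPDK_TEST_NVMF_NICS:=}";         export SPDK_TEST_NVMF_NICS

Values exported before this script ran are therefore left untouched here, which is why the filesystem test below ends up on a TCP transport with e810 NICs.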
00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 2198379 ]] 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 2198379 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.NsAcjR 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.NsAcjR/tests/target /tmp/spdk.NsAcjR 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=968667136 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4315762688 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=48380366848 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=13614362624 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:59.917 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941728768 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12389978112 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8970240 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30995755008 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:05:59.918 02:22:47 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1609728 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:05:59.918 * Looking for test storage... 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=48380366848 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=15828955136 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:05:59.918 02:22:47 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.918 
02:22:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.918 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:59.919 02:22:47 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:59.919 02:22:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:03.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:03.200 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:03.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.201 02:22:49 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:03.201 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:03.201 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:03.201 02:22:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:03.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:03.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:06:03.201 00:06:03.201 --- 10.0.0.2 ping statistics --- 00:06:03.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.201 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:03.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:06:03.201 00:06:03.201 --- 10.0.0.1 ping statistics --- 00:06:03.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.201 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.201 ************************************ 00:06:03.201 START TEST nvmf_filesystem_no_in_capsule 00:06:03.201 ************************************ 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2200307 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2200307 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2200307 ']' 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.201 02:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.201 [2024-05-15 02:22:50.128550] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:06:03.201 [2024-05-15 02:22:50.128637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.201 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.201 [2024-05-15 02:22:50.208806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.201 [2024-05-15 02:22:50.332763] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.201 [2024-05-15 02:22:50.332822] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.201 [2024-05-15 02:22:50.332838] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.201 [2024-05-15 02:22:50.332856] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.201 [2024-05-15 02:22:50.332868] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
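For reference, the nvmf_tcp_init sequence traced above reduces to the minimal sketch below. This is a reconstruction, not the harness script itself: the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the nvmf_tgt flags are copied from this particular run and are environment-specific, and the nvmf_tgt path is shortened to a relative one.

#!/usr/bin/env bash
# Minimal reconstruction of the network bring-up traced above (run as root).
# Assumes two test NICs already named cvl_0_0/cvl_0_1 by the harness.
set -ex
TARGET_IF=cvl_0_0                # moved into a namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1             # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic (port 4420) in from the initiator side.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
# Launch the SPDK target inside the namespace, with the flags seen in this log.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &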
00:06:03.202 [2024-05-15 02:22:50.332969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.202 [2024-05-15 02:22:50.332997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.202 [2024-05-15 02:22:50.333023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.202 [2024-05-15 02:22:50.333026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.769 [2024-05-15 02:22:51.101868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.769 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.028 Malloc1 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.028 [2024-05-15 02:22:51.277374] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:04.028 [2024-05-15 02:22:51.277676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:04.028 { 00:06:04.028 "name": "Malloc1", 00:06:04.028 "aliases": [ 00:06:04.028 "8c38f31f-ad1c-437f-9ae8-28e3d3a8c62c" 00:06:04.028 ], 00:06:04.028 "product_name": "Malloc disk", 00:06:04.028 "block_size": 512, 00:06:04.028 "num_blocks": 1048576, 00:06:04.028 "uuid": "8c38f31f-ad1c-437f-9ae8-28e3d3a8c62c", 00:06:04.028 "assigned_rate_limits": { 00:06:04.028 "rw_ios_per_sec": 0, 00:06:04.028 "rw_mbytes_per_sec": 0, 00:06:04.028 "r_mbytes_per_sec": 0, 00:06:04.028 "w_mbytes_per_sec": 0 00:06:04.028 }, 00:06:04.028 "claimed": true, 00:06:04.028 "claim_type": "exclusive_write", 00:06:04.028 "zoned": false, 00:06:04.028 "supported_io_types": { 00:06:04.028 "read": true, 00:06:04.028 "write": true, 00:06:04.028 "unmap": true, 00:06:04.028 "write_zeroes": true, 00:06:04.028 "flush": true, 00:06:04.028 "reset": true, 00:06:04.028 "compare": false, 00:06:04.028 "compare_and_write": false, 00:06:04.028 "abort": true, 00:06:04.028 "nvme_admin": false, 00:06:04.028 "nvme_io": false 00:06:04.028 }, 00:06:04.028 "memory_domains": [ 00:06:04.028 { 00:06:04.028 "dma_device_id": "system", 00:06:04.028 "dma_device_type": 1 
00:06:04.028 }, 00:06:04.028 { 00:06:04.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.028 "dma_device_type": 2 00:06:04.028 } 00:06:04.028 ], 00:06:04.028 "driver_specific": {} 00:06:04.028 } 00:06:04.028 ]' 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:04.028 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:04.029 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:04.029 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:04.029 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:04.029 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:04.029 02:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:04.961 02:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:04.961 02:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:04.961 02:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:04.961 02:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:04.961 02:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:06.857 02:22:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:06.857 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:07.116 02:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:08.049 02:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.046 ************************************ 00:06:09.046 START TEST filesystem_ext4 00:06:09.046 ************************************ 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:09.046 02:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:09.046 02:22:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:09.046 mke2fs 1.46.5 (30-Dec-2021) 00:06:09.305 Discarding device blocks: 0/522240 done 00:06:09.305 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:09.305 Filesystem UUID: 0611b8ab-a567-45ec-891c-675ae5a98825 00:06:09.305 Superblock backups stored on blocks: 00:06:09.305 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:09.305 00:06:09.305 Allocating group tables: 0/64 done 00:06:09.305 Writing inode tables: 0/64 done 00:06:12.585 Creating journal (8192 blocks): done 00:06:13.101 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:06:13.101 00:06:13.101 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:13.101 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:13.359 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:13.359 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:13.359 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:13.359 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2200307 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:13.360 00:06:13.360 real 0m4.333s 00:06:13.360 user 0m0.018s 00:06:13.360 sys 0m0.039s 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:13.360 ************************************ 00:06:13.360 END TEST filesystem_ext4 00:06:13.360 ************************************ 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.360 
02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:13.360 ************************************ 00:06:13.360 START TEST filesystem_btrfs 00:06:13.360 ************************************ 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:13.360 02:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:13.926 btrfs-progs v6.6.2 00:06:13.926 See https://btrfs.readthedocs.io for more information. 00:06:13.926 00:06:13.926 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:13.926 NOTE: several default settings have changed in version 5.15, please make sure 00:06:13.926 this does not affect your deployments: 00:06:13.926 - DUP for metadata (-m dup) 00:06:13.926 - enabled no-holes (-O no-holes) 00:06:13.926 - enabled free-space-tree (-R free-space-tree) 00:06:13.926 00:06:13.926 Label: (null) 00:06:13.926 UUID: 41bb9512-dfa1-4e58-aa87-a903aa305ec2 00:06:13.926 Node size: 16384 00:06:13.926 Sector size: 4096 00:06:13.926 Filesystem size: 510.00MiB 00:06:13.926 Block group profiles: 00:06:13.926 Data: single 8.00MiB 00:06:13.926 Metadata: DUP 32.00MiB 00:06:13.926 System: DUP 8.00MiB 00:06:13.926 SSD detected: yes 00:06:13.926 Zoned device: no 00:06:13.926 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:13.926 Runtime features: free-space-tree 00:06:13.926 Checksum: crc32c 00:06:13.926 Number of devices: 1 00:06:13.926 Devices: 00:06:13.926 ID SIZE PATH 00:06:13.926 1 510.00MiB /dev/nvme0n1p1 00:06:13.926 00:06:13.926 02:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:13.926 02:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2200307 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:14.861 00:06:14.861 real 0m1.346s 00:06:14.861 user 0m0.010s 00:06:14.861 sys 0m0.042s 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:14.861 ************************************ 00:06:14.861 END TEST filesystem_btrfs 00:06:14.861 ************************************ 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:14.861 02:23:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.861 ************************************ 00:06:14.861 START TEST filesystem_xfs 00:06:14.861 ************************************ 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:14.861 02:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:14.861 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:14.861 = sectsz=512 attr=2, projid32bit=1 00:06:14.861 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:14.861 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:14.861 data = bsize=4096 blocks=130560, imaxpct=25 00:06:14.861 = sunit=0 swidth=0 blks 00:06:14.861 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:14.861 log =internal log bsize=4096 blocks=16384, version=2 00:06:14.861 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:14.861 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:15.795 Discarding blocks...Done. 
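Each filesystem_* subtest above follows the same pattern: partition the exported namespace, format it, and do a small write/delete round-trip before checking the block device is still visible. A condensed sketch of that loop, assuming the NVMe/TCP namespace shows up as /dev/nvme0n1 as it does in this run (mkfs flags match the make_filesystem helper traced above):

#!/usr/bin/env bash
# Condensed view of the per-filesystem check (ext4/btrfs/xfs) from the trace.
set -ex
DEV=/dev/nvme0n1            # device name observed in this run; may differ
PART=${DEV}p1
MNT=/mnt/device
parted -s "$DEV" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
mkdir -p "$MNT"
for fs in ext4 btrfs xfs; do
    case "$fs" in
        ext4) mkfs.ext4 -F "$PART" ;;    # ext4 forces with -F
        *)    "mkfs.$fs" -f "$PART" ;;   # btrfs and xfs force with -f
    esac
    mount "$PART" "$MNT"
    touch "$MNT/aaa"; sync
    rm "$MNT/aaa"; sync
    umount "$MNT"
    # Partition must still be visible after the round-trip.
    lsblk -l -o NAME | grep -q -w "$(basename "$PART")"
done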
00:06:15.795 02:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:15.795 02:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2200307 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:18.320 00:06:18.320 real 0m3.437s 00:06:18.320 user 0m0.015s 00:06:18.320 sys 0m0.035s 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:18.320 ************************************ 00:06:18.320 END TEST filesystem_xfs 00:06:18.320 ************************************ 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:18.320 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:18.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:18.578 
02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2200307 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2200307 ']' 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2200307 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2200307 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2200307' 00:06:18.578 killing process with pid 2200307 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 2200307 00:06:18.578 [2024-05-15 02:23:05.815014] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:18.578 02:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 2200307 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:19.145 00:06:19.145 real 0m16.230s 00:06:19.145 user 1m2.446s 00:06:19.145 sys 0m2.067s 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.145 ************************************ 00:06:19.145 END TEST nvmf_filesystem_no_in_capsule 00:06:19.145 ************************************ 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.145 ************************************ 00:06:19.145 START TEST nvmf_filesystem_in_capsule 00:06:19.145 ************************************ 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2202413 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2202413 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2202413 ']' 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.145 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.145 [2024-05-15 02:23:06.415396] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:06:19.145 [2024-05-15 02:23:06.415470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.145 [2024-05-15 02:23:06.498125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.404 [2024-05-15 02:23:06.619671] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.404 [2024-05-15 02:23:06.619752] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:19.404 [2024-05-15 02:23:06.619770] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.404 [2024-05-15 02:23:06.619784] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.404 [2024-05-15 02:23:06.619796] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.404 [2024-05-15 02:23:06.619866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.404 [2024-05-15 02:23:06.619921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.404 [2024-05-15 02:23:06.619975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.404 [2024-05-15 02:23:06.619980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.404 [2024-05-15 02:23:06.769579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.404 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.663 Malloc1 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.663 02:23:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.663 [2024-05-15 02:23:06.944182] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:19.663 [2024-05-15 02:23:06.944515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:19.663 { 00:06:19.663 "name": "Malloc1", 00:06:19.663 "aliases": [ 00:06:19.663 "a9eb434c-b2be-4dfb-9a79-65a720811524" 00:06:19.663 ], 00:06:19.663 "product_name": "Malloc disk", 00:06:19.663 "block_size": 512, 00:06:19.663 "num_blocks": 1048576, 00:06:19.663 "uuid": "a9eb434c-b2be-4dfb-9a79-65a720811524", 00:06:19.663 "assigned_rate_limits": { 00:06:19.663 "rw_ios_per_sec": 0, 00:06:19.663 "rw_mbytes_per_sec": 0, 00:06:19.663 "r_mbytes_per_sec": 0, 00:06:19.663 "w_mbytes_per_sec": 0 00:06:19.663 }, 00:06:19.663 "claimed": true, 00:06:19.663 "claim_type": "exclusive_write", 00:06:19.663 "zoned": false, 00:06:19.663 "supported_io_types": { 00:06:19.663 "read": true, 00:06:19.663 "write": true, 00:06:19.663 "unmap": true, 00:06:19.663 "write_zeroes": true, 00:06:19.663 "flush": true, 00:06:19.663 "reset": true, 
00:06:19.663 "compare": false, 00:06:19.663 "compare_and_write": false, 00:06:19.663 "abort": true, 00:06:19.663 "nvme_admin": false, 00:06:19.663 "nvme_io": false 00:06:19.663 }, 00:06:19.663 "memory_domains": [ 00:06:19.663 { 00:06:19.663 "dma_device_id": "system", 00:06:19.663 "dma_device_type": 1 00:06:19.663 }, 00:06:19.663 { 00:06:19.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.663 "dma_device_type": 2 00:06:19.663 } 00:06:19.663 ], 00:06:19.663 "driver_specific": {} 00:06:19.663 } 00:06:19.663 ]' 00:06:19.663 02:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:19.663 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:19.664 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:19.664 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:19.664 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:19.664 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:19.664 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:19.664 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:20.598 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:20.598 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:20.598 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:20.598 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:20.598 02:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:22.497 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:22.755 02:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:23.013 02:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:23.946 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:23.946 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:23.946 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:23.946 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.946 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.204 ************************************ 00:06:24.205 START TEST filesystem_in_capsule_ext4 00:06:24.205 ************************************ 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:24.205 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:24.205 mke2fs 1.46.5 (30-Dec-2021) 00:06:24.205 Discarding device blocks: 0/522240 done 00:06:24.205 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:24.205 Filesystem UUID: 42d58c1a-1c6c-4513-9220-405bad07a17f 00:06:24.205 Superblock backups stored on blocks: 00:06:24.205 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:24.205 00:06:24.205 Allocating group tables: 0/64 done 00:06:24.205 Writing inode tables: 0/64 done 00:06:24.462 Creating journal (8192 blocks): done 00:06:24.462 Writing superblocks and filesystem accounting information: 0/64 done 00:06:24.462 00:06:24.462 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:24.462 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2202413 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.720 00:06:24.720 real 0m0.616s 00:06:24.720 user 0m0.014s 00:06:24.720 sys 0m0.039s 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.720 02:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:24.720 ************************************ 00:06:24.720 END TEST filesystem_in_capsule_ext4 00:06:24.720 ************************************ 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.720 ************************************ 00:06:24.720 START TEST filesystem_in_capsule_btrfs 00:06:24.720 ************************************ 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:24.720 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:25.366 btrfs-progs v6.6.2 00:06:25.366 See https://btrfs.readthedocs.io for more information. 00:06:25.366 00:06:25.366 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:25.366 NOTE: several default settings have changed in version 5.15, please make sure 00:06:25.366 this does not affect your deployments: 00:06:25.366 - DUP for metadata (-m dup) 00:06:25.366 - enabled no-holes (-O no-holes) 00:06:25.366 - enabled free-space-tree (-R free-space-tree) 00:06:25.366 00:06:25.366 Label: (null) 00:06:25.366 UUID: d495545d-3ef4-4ff0-bf38-fea35da58b0d 00:06:25.366 Node size: 16384 00:06:25.366 Sector size: 4096 00:06:25.366 Filesystem size: 510.00MiB 00:06:25.366 Block group profiles: 00:06:25.366 Data: single 8.00MiB 00:06:25.366 Metadata: DUP 32.00MiB 00:06:25.366 System: DUP 8.00MiB 00:06:25.366 SSD detected: yes 00:06:25.366 Zoned device: no 00:06:25.366 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:25.366 Runtime features: free-space-tree 00:06:25.366 Checksum: crc32c 00:06:25.366 Number of devices: 1 00:06:25.366 Devices: 00:06:25.366 ID SIZE PATH 00:06:25.366 1 510.00MiB /dev/nvme0n1p1 00:06:25.366 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2202413 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:25.366 00:06:25.366 real 0m0.641s 00:06:25.366 user 0m0.022s 00:06:25.366 sys 0m0.038s 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:25.366 ************************************ 00:06:25.366 END TEST filesystem_in_capsule_btrfs 00:06:25.366 ************************************ 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.366 ************************************ 00:06:25.366 START TEST filesystem_in_capsule_xfs 00:06:25.366 ************************************ 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:25.366 02:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:25.625 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:25.625 = sectsz=512 attr=2, projid32bit=1 00:06:25.625 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:25.625 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:25.625 data = bsize=4096 blocks=130560, imaxpct=25 00:06:25.625 = sunit=0 swidth=0 blks 00:06:25.625 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:25.625 log =internal log bsize=4096 blocks=16384, version=2 00:06:25.625 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:25.625 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:26.203 Discarding blocks...Done. 
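The ext4, btrfs, and xfs runs above all exercise the same flow from target/filesystem.sh and the make_filesystem helper in common/autotest_common.sh: pick the right force flag for the filesystem type, build the filesystem on the namespace's first partition, then mount it, create and delete a file, and unmount while the target process is checked with kill -0. A minimal standalone sketch of that flow is below; the retry limit and sleep are illustrative assumptions rather than the exact harness code, and the device path simply mirrors the /dev/nvme0n1p1 seen in the trace.

  #!/usr/bin/env bash
  # Simplified sketch of the make_filesystem helper traced above
  # (common/autotest_common.sh@922-941); the retry policy here is assumed.
  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0
      local force
      if [[ $fstype == ext4 ]]; then
          force=-F                      # mkfs.ext4 forces with -F
      else
          force=-f                      # mkfs.btrfs and mkfs.xfs force with -f
      fi
      until mkfs."$fstype" $force "$dev_name"; do
          (( ++i < 3 )) || return 1     # assumed retry limit
          sleep 1
      done
      return 0
  }

  # Exercise the new filesystem the way target/filesystem.sh@23-30 does.
  dev=/dev/nvme0n1p1                    # first partition of the attached NVMe-oF namespace
  mnt=/mnt/device
  mkdir -p "$mnt"
  make_filesystem ext4 "$dev"
  mount "$dev" "$mnt"
  touch "$mnt/aaa"
  sync
  rm "$mnt/aaa"
  sync
  umount "$mnt"
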
00:06:26.203 02:23:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:26.203 02:23:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2202413 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:28.737 00:06:28.737 real 0m3.053s 00:06:28.737 user 0m0.014s 00:06:28.737 sys 0m0.040s 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:28.737 ************************************ 00:06:28.737 END TEST filesystem_in_capsule_xfs 00:06:28.737 ************************************ 00:06:28.737 02:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:28.737 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:28.737 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:28.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:28.996 02:23:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2202413 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2202413 ']' 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2202413 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2202413 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2202413' 00:06:28.996 killing process with pid 2202413 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 2202413 00:06:28.996 [2024-05-15 02:23:16.227830] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:28.996 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 2202413 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:29.563 00:06:29.563 real 0m10.342s 00:06:29.563 user 0m39.256s 00:06:29.563 sys 0m1.562s 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.563 ************************************ 00:06:29.563 END TEST nvmf_filesystem_in_capsule 00:06:29.563 ************************************ 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:29.563 rmmod nvme_tcp 00:06:29.563 rmmod nvme_fabrics 00:06:29.563 rmmod nvme_keyring 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:29.563 02:23:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.467 02:23:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:31.467 00:06:31.467 real 0m31.760s 00:06:31.467 user 1m42.872s 00:06:31.467 sys 0m5.671s 00:06:31.467 02:23:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.467 02:23:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.467 ************************************ 00:06:31.467 END TEST nvmf_filesystem 00:06:31.467 ************************************ 00:06:31.467 02:23:18 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:31.467 02:23:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:31.467 02:23:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.467 02:23:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.725 ************************************ 00:06:31.725 START TEST nvmf_target_discovery 00:06:31.725 ************************************ 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:31.725 * Looking for test storage... 
00:06:31.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:31.725 02:23:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:34.257 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.258 02:23:21 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:34.258 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:34.258 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:34.258 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:34.258 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:34.258 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:34.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:06:34.517 00:06:34.517 --- 10.0.0.2 ping statistics --- 00:06:34.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.517 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:34.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:06:34.517 00:06:34.517 --- 10.0.0.1 ping statistics --- 00:06:34.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.517 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2206172 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2206172 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 2206172 ']' 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:34.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.517 02:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 [2024-05-15 02:23:21.792288] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:06:34.517 [2024-05-15 02:23:21.792374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.517 [2024-05-15 02:23:21.873948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.775 [2024-05-15 02:23:21.997726] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.775 [2024-05-15 02:23:21.997782] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.775 [2024-05-15 02:23:21.997799] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.775 [2024-05-15 02:23:21.997812] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.775 [2024-05-15 02:23:21.997831] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:34.775 [2024-05-15 02:23:21.997889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.775 [2024-05-15 02:23:21.997952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.775 [2024-05-15 02:23:21.997984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.775 [2024-05-15 02:23:21.997987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 [2024-05-15 02:23:22.807057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:35.709 02:23:22 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 Null1 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 [2024-05-15 02:23:22.847069] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:35.709 [2024-05-15 02:23:22.847370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 Null2 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 Null3 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:35.709 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 Null4 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.710 02:23:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:35.710 00:06:35.710 Discovery Log Number of Records 6, Generation counter 6 00:06:35.710 =====Discovery Log Entry 0====== 00:06:35.710 trtype: tcp 00:06:35.710 adrfam: ipv4 00:06:35.710 subtype: current discovery subsystem 00:06:35.710 treq: not required 00:06:35.710 portid: 0 00:06:35.710 trsvcid: 4420 00:06:35.710 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:35.710 traddr: 10.0.0.2 00:06:35.710 eflags: explicit discovery connections, duplicate discovery information 00:06:35.710 sectype: none 00:06:35.710 =====Discovery Log Entry 1====== 00:06:35.710 trtype: tcp 00:06:35.710 adrfam: ipv4 00:06:35.710 subtype: nvme subsystem 00:06:35.710 treq: not required 00:06:35.710 portid: 0 00:06:35.710 trsvcid: 4420 00:06:35.710 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:35.710 traddr: 10.0.0.2 00:06:35.710 eflags: none 00:06:35.710 sectype: none 00:06:35.710 =====Discovery Log Entry 2====== 00:06:35.710 trtype: tcp 00:06:35.710 adrfam: ipv4 00:06:35.710 subtype: nvme subsystem 00:06:35.710 treq: not required 00:06:35.710 portid: 0 00:06:35.710 trsvcid: 4420 00:06:35.710 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:35.710 traddr: 10.0.0.2 00:06:35.710 eflags: none 00:06:35.710 sectype: none 00:06:35.710 =====Discovery Log Entry 3====== 00:06:35.710 trtype: tcp 00:06:35.710 adrfam: ipv4 00:06:35.710 subtype: nvme subsystem 00:06:35.710 treq: not required 00:06:35.710 portid: 0 00:06:35.710 trsvcid: 4420 00:06:35.710 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:35.710 traddr: 10.0.0.2 
00:06:35.710 eflags: none 00:06:35.710 sectype: none 00:06:35.710 =====Discovery Log Entry 4====== 00:06:35.710 trtype: tcp 00:06:35.710 adrfam: ipv4 00:06:35.710 subtype: nvme subsystem 00:06:35.710 treq: not required 00:06:35.710 portid: 0 00:06:35.710 trsvcid: 4420 00:06:35.710 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:35.710 traddr: 10.0.0.2 00:06:35.710 eflags: none 00:06:35.710 sectype: none 00:06:35.710 =====Discovery Log Entry 5====== 00:06:35.710 trtype: tcp 00:06:35.710 adrfam: ipv4 00:06:35.710 subtype: discovery subsystem referral 00:06:35.710 treq: not required 00:06:35.710 portid: 0 00:06:35.710 trsvcid: 4430 00:06:35.710 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:35.710 traddr: 10.0.0.2 00:06:35.710 eflags: none 00:06:35.710 sectype: none 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:35.710 Perform nvmf subsystem discovery via RPC 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 [ 00:06:35.710 { 00:06:35.710 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:35.710 "subtype": "Discovery", 00:06:35.710 "listen_addresses": [ 00:06:35.710 { 00:06:35.710 "trtype": "TCP", 00:06:35.710 "adrfam": "IPv4", 00:06:35.710 "traddr": "10.0.0.2", 00:06:35.710 "trsvcid": "4420" 00:06:35.710 } 00:06:35.710 ], 00:06:35.710 "allow_any_host": true, 00:06:35.710 "hosts": [] 00:06:35.710 }, 00:06:35.710 { 00:06:35.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:35.710 "subtype": "NVMe", 00:06:35.710 "listen_addresses": [ 00:06:35.710 { 00:06:35.710 "trtype": "TCP", 00:06:35.710 "adrfam": "IPv4", 00:06:35.710 "traddr": "10.0.0.2", 00:06:35.710 "trsvcid": "4420" 00:06:35.710 } 00:06:35.710 ], 00:06:35.710 "allow_any_host": true, 00:06:35.710 "hosts": [], 00:06:35.710 "serial_number": "SPDK00000000000001", 00:06:35.710 "model_number": "SPDK bdev Controller", 00:06:35.710 "max_namespaces": 32, 00:06:35.710 "min_cntlid": 1, 00:06:35.710 "max_cntlid": 65519, 00:06:35.710 "namespaces": [ 00:06:35.710 { 00:06:35.710 "nsid": 1, 00:06:35.710 "bdev_name": "Null1", 00:06:35.710 "name": "Null1", 00:06:35.710 "nguid": "58EFA9A555FC4A6AB1F8FD8A3159A748", 00:06:35.710 "uuid": "58efa9a5-55fc-4a6a-b1f8-fd8a3159a748" 00:06:35.710 } 00:06:35.710 ] 00:06:35.710 }, 00:06:35.710 { 00:06:35.710 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:35.710 "subtype": "NVMe", 00:06:35.710 "listen_addresses": [ 00:06:35.710 { 00:06:35.710 "trtype": "TCP", 00:06:35.710 "adrfam": "IPv4", 00:06:35.710 "traddr": "10.0.0.2", 00:06:35.710 "trsvcid": "4420" 00:06:35.710 } 00:06:35.710 ], 00:06:35.710 "allow_any_host": true, 00:06:35.710 "hosts": [], 00:06:35.710 "serial_number": "SPDK00000000000002", 00:06:35.710 "model_number": "SPDK bdev Controller", 00:06:35.710 "max_namespaces": 32, 00:06:35.710 "min_cntlid": 1, 00:06:35.710 "max_cntlid": 65519, 00:06:35.710 "namespaces": [ 00:06:35.710 { 00:06:35.710 "nsid": 1, 00:06:35.710 "bdev_name": "Null2", 00:06:35.710 "name": "Null2", 00:06:35.710 "nguid": "DD5A385E726947B0BA84F9AB01508C3D", 00:06:35.710 "uuid": "dd5a385e-7269-47b0-ba84-f9ab01508c3d" 00:06:35.710 } 00:06:35.710 ] 00:06:35.710 }, 00:06:35.710 { 00:06:35.710 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:35.710 "subtype": "NVMe", 00:06:35.710 "listen_addresses": [ 
00:06:35.710 { 00:06:35.710 "trtype": "TCP", 00:06:35.710 "adrfam": "IPv4", 00:06:35.710 "traddr": "10.0.0.2", 00:06:35.710 "trsvcid": "4420" 00:06:35.710 } 00:06:35.710 ], 00:06:35.710 "allow_any_host": true, 00:06:35.710 "hosts": [], 00:06:35.710 "serial_number": "SPDK00000000000003", 00:06:35.710 "model_number": "SPDK bdev Controller", 00:06:35.710 "max_namespaces": 32, 00:06:35.710 "min_cntlid": 1, 00:06:35.710 "max_cntlid": 65519, 00:06:35.710 "namespaces": [ 00:06:35.710 { 00:06:35.710 "nsid": 1, 00:06:35.710 "bdev_name": "Null3", 00:06:35.710 "name": "Null3", 00:06:35.710 "nguid": "84A1973596854B259BD8A74DDC1B528E", 00:06:35.710 "uuid": "84a19735-9685-4b25-9bd8-a74ddc1b528e" 00:06:35.710 } 00:06:35.710 ] 00:06:35.710 }, 00:06:35.710 { 00:06:35.710 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:35.710 "subtype": "NVMe", 00:06:35.710 "listen_addresses": [ 00:06:35.710 { 00:06:35.710 "trtype": "TCP", 00:06:35.710 "adrfam": "IPv4", 00:06:35.710 "traddr": "10.0.0.2", 00:06:35.710 "trsvcid": "4420" 00:06:35.710 } 00:06:35.710 ], 00:06:35.710 "allow_any_host": true, 00:06:35.710 "hosts": [], 00:06:35.710 "serial_number": "SPDK00000000000004", 00:06:35.710 "model_number": "SPDK bdev Controller", 00:06:35.710 "max_namespaces": 32, 00:06:35.710 "min_cntlid": 1, 00:06:35.710 "max_cntlid": 65519, 00:06:35.710 "namespaces": [ 00:06:35.710 { 00:06:35.710 "nsid": 1, 00:06:35.710 "bdev_name": "Null4", 00:06:35.710 "name": "Null4", 00:06:35.710 "nguid": "0E072A1B6CBC49399A48DF10EF847BF8", 00:06:35.710 "uuid": "0e072a1b-6cbc-4939-9a48-df10ef847bf8" 00:06:35.710 } 00:06:35.710 ] 00:06:35.710 } 00:06:35.710 ] 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.710 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.711 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.969 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.969 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:35.969 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:35.969 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.969 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:35.970 
02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:35.970 rmmod nvme_tcp 00:06:35.970 rmmod nvme_fabrics 00:06:35.970 rmmod nvme_keyring 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2206172 ']' 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2206172 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 2206172 ']' 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 2206172 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2206172 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2206172' 00:06:35.970 killing process with pid 2206172 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 2206172 00:06:35.970 [2024-05-15 02:23:23.240052] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:35.970 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 2206172 00:06:36.229 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:36.229 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:36.229 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:36.229 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:36.229 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:36.229 02:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.229 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.229 02:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.766 02:23:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:38.766 00:06:38.766 real 0m6.680s 00:06:38.766 user 
0m7.159s 00:06:38.766 sys 0m2.315s 00:06:38.766 02:23:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.766 02:23:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.766 ************************************ 00:06:38.766 END TEST nvmf_target_discovery 00:06:38.766 ************************************ 00:06:38.766 02:23:25 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:38.766 02:23:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:38.766 02:23:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.766 02:23:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.766 ************************************ 00:06:38.766 START TEST nvmf_referrals 00:06:38.766 ************************************ 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:38.766 * Looking for test storage... 00:06:38.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.766 02:23:25 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.766 02:23:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:38.767 02:23:25 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:38.767 02:23:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:41.300 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:41.300 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:41.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:41.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:41.300 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
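(Editor's note: the network bring-up captured in the surrounding entries can be reproduced by hand. A minimal sketch, assuming the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing used in this run, and root privileges; the authoritative logic is the nvmf_tcp_init path in test/nvmf/common.sh, which covers more configurations than shown here.)

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side keeps 10.0.0.1, the namespaced target side gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic through and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1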
00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:41.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:06:41.301 00:06:41.301 --- 10.0.0.2 ping statistics --- 00:06:41.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.301 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:06:41.301 00:06:41.301 --- 10.0.0.1 ping statistics --- 00:06:41.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.301 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2208821 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2208821 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 2208821 ']' 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.301 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.301 [2024-05-15 02:23:28.465176] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:06:41.301 [2024-05-15 02:23:28.465271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.301 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.301 [2024-05-15 02:23:28.543627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.301 [2024-05-15 02:23:28.655429] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.301 [2024-05-15 02:23:28.655499] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.301 [2024-05-15 02:23:28.655513] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.301 [2024-05-15 02:23:28.655523] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.301 [2024-05-15 02:23:28.655532] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:41.301 [2024-05-15 02:23:28.655614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.301 [2024-05-15 02:23:28.655648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.301 [2024-05-15 02:23:28.655701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.301 [2024-05-15 02:23:28.655703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.559 [2024-05-15 02:23:28.821660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.559 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 [2024-05-15 02:23:28.833631] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:41.560 [2024-05-15 02:23:28.833896] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:41.560 02:23:28 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:41.560 02:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.818 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.076 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
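(Editor's note: the referral checks in this test reduce to a short RPC / nvme-cli round trip. A minimal sketch using the addresses and ports from this run (discovery service on 10.0.0.2:8009, referral target 127.0.0.2:4430) and calling scripts/rpc.py directly instead of the rpc_cmd wrapper; the --hostnqn/--hostid arguments used in the log are omitted for brevity.)

    # register a referral to another discovery service on 127.0.0.2:4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    # (passing -n nqn.2016-06.io.spdk:cnode1 instead advertises a referral to that
    #  specific subsystem, as this test also does alongside -n discovery)
    # the referral is visible through the RPC interface ...
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # ... and in the discovery log an initiator retrieves from 10.0.0.2:8009
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | \
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # remove it again
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430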
00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.341 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:42.643 02:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:42.643 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:42.902 rmmod nvme_tcp 00:06:42.902 rmmod nvme_fabrics 00:06:42.902 rmmod nvme_keyring 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2208821 ']' 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2208821 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 2208821 ']' 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 2208821 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2208821 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2208821' 00:06:42.902 killing process with pid 2208821 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 2208821 00:06:42.902 [2024-05-15 02:23:30.118463] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:42.902 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 2208821 00:06:43.162 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.162 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.162 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:43.162 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.162 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
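(Editor's note: the nvmftestfini teardown that closes each of these tests mirrors the setup. A minimal sketch using the names from this run; _remove_spdk_ns itself is not echoed in the log, so the netns deletion below is an assumption about what that step amounts to.)

    sync
    # unloading nvme-tcp also drops nvme_fabrics/nvme_keyring here, per the rmmod output above
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the target app (PID 2208821 in this run), as killprocess/wait do in the log
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk   # assumption: the effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1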
00:06:43.162 02:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.162 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.162 02:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.068 02:23:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:45.068 00:06:45.068 real 0m6.823s 00:06:45.068 user 0m8.338s 00:06:45.068 sys 0m2.364s 00:06:45.068 02:23:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.068 02:23:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.068 ************************************ 00:06:45.068 END TEST nvmf_referrals 00:06:45.068 ************************************ 00:06:45.068 02:23:32 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:45.068 02:23:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:45.068 02:23:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.068 02:23:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 ************************************ 00:06:45.327 START TEST nvmf_connect_disconnect 00:06:45.327 ************************************ 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:45.327 * Looking for test storage... 00:06:45.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.327 02:23:32 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.327 02:23:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:47.861 
02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:47.861 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:47.861 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.861 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:47.862 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:47.862 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- 
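Before any namespace setup, gather_supported_nvmf_pci_devs walks the PCI bus for supported NICs; with SPDK_TEST_NVMF_NICS=e810 the two Intel E810 ports (vendor 0x8086, device 0x159b) at 0000:0a:00.0/.1 are selected and mapped through sysfs to the kernel net devices the ice driver created, cvl_0_0 and cvl_0_1. A simplified equivalent of that lookup (not the actual common.sh code, which consults a prebuilt pci_bus_cache array):

    # list E810 ports and the net devices bound to them
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
        ls "$pci/net/"    # e.g. cvl_0_0 for 0000:0a:00.0, cvl_0_1 for 0000:0a:00.1
    done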
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:47.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:47.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:06:47.862 00:06:47.862 --- 10.0.0.2 ping statistics --- 00:06:47.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.862 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:47.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:06:47.862 00:06:47.862 --- 10.0.0.1 ping statistics --- 00:06:47.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.862 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2211434 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2211434 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 2211434 ']' 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.862 02:23:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:47.862 [2024-05-15 02:23:35.265646] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
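nvmf_tcp_init builds the two-endpoint TCP topology used by every test in this run: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is started inside the namespace. Collected from the trace above (paths shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    # nvmfappstart: run the target inside the namespace (pid 2211434 in this test)
    ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &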
00:06:47.862 [2024-05-15 02:23:35.265746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.121 [2024-05-15 02:23:35.351900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.121 [2024-05-15 02:23:35.472703] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.121 [2024-05-15 02:23:35.472775] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.121 [2024-05-15 02:23:35.472791] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.121 [2024-05-15 02:23:35.472805] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.121 [2024-05-15 02:23:35.472817] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.121 [2024-05-15 02:23:35.472910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.121 [2024-05-15 02:23:35.472996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.121 [2024-05-15 02:23:35.472968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.121 [2024-05-15 02:23:35.472999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.055 [2024-05-15 02:23:36.275089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:49.055 02:23:36 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.055 [2024-05-15 02:23:36.336275] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:49.055 [2024-05-15 02:23:36.336617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:49.055 02:23:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:52.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:03.252 rmmod nvme_tcp 00:07:03.252 rmmod nvme_fabrics 00:07:03.252 rmmod nvme_keyring 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:03.252 02:23:50 
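With the target up, connect_disconnect.sh configures a single TCP subsystem over the RPC socket and then loops num_iterations=5 times connecting and disconnecting a host; each pass produces one of the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above. The RPC sequence below is taken from the trace (rpc_cmd wraps scripts/rpc.py); the connect/disconnect pair is not traced in this excerpt, so the nvme-cli form shown is an assumption:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                     # -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in $(seq 1 5); do   # num_iterations=5
        # assumed host-side commands (not visible in this excerpt):
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the "disconnected 1 controller(s)" line
    done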
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2211434 ']' 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2211434 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2211434 ']' 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 2211434 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2211434 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2211434' 00:07:03.252 killing process with pid 2211434 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 2211434 00:07:03.252 [2024-05-15 02:23:50.109811] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 2211434 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.252 02:23:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.152 02:23:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.152 00:07:05.152 real 0m19.947s 00:07:05.152 user 0m59.445s 00:07:05.152 sys 0m3.491s 00:07:05.152 02:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.152 02:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.152 ************************************ 00:07:05.152 END TEST nvmf_connect_disconnect 00:07:05.152 ************************************ 00:07:05.152 02:23:52 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:05.152 02:23:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:05.152 02:23:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.152 02:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.152 ************************************ 00:07:05.152 START TEST nvmf_multitarget 
00:07:05.152 ************************************ 00:07:05.152 02:23:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:05.152 * Looking for test storage... 00:07:05.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.410 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.411 02:23:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:07.940 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:07.940 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:07.940 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:07.940 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.940 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:07.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:07:07.941 00:07:07.941 --- 10.0.0.2 ping statistics --- 00:07:07.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.941 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:07:07.941 00:07:07.941 --- 10.0.0.1 ping statistics --- 00:07:07.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.941 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2215504 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2215504 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 2215504 ']' 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.941 02:23:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.941 [2024-05-15 02:23:55.256016] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:07:07.941 [2024-05-15 02:23:55.256094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.941 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.941 [2024-05-15 02:23:55.336301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.199 [2024-05-15 02:23:55.457661] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.199 [2024-05-15 02:23:55.457732] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.199 [2024-05-15 02:23:55.457748] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.199 [2024-05-15 02:23:55.457762] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.199 [2024-05-15 02:23:55.457774] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.199 [2024-05-15 02:23:55.457867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.199 [2024-05-15 02:23:55.457954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.199 [2024-05-15 02:23:55.458004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.199 [2024-05-15 02:23:55.458007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:09.131 "nvmf_tgt_1" 00:07:09.131 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:09.388 "nvmf_tgt_2" 00:07:09.388 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:09.388 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:09.388 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:09.388 
02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:09.388 true 00:07:09.646 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:09.646 true 00:07:09.646 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:09.646 02:23:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:09.646 02:23:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:09.646 02:23:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:09.646 02:23:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:09.646 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:09.646 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:09.646 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:09.646 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:09.646 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:09.646 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:09.646 rmmod nvme_tcp 00:07:09.646 rmmod nvme_fabrics 00:07:09.905 rmmod nvme_keyring 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2215504 ']' 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2215504 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 2215504 ']' 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 2215504 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2215504 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2215504' 00:07:09.905 killing process with pid 2215504 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 2215504 00:07:09.905 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 2215504 00:07:10.163 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:10.163 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:10.163 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:10.163 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
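The multitarget test exercises SPDK's ability to host several independent NVMe-oF targets inside one nvmf_tgt process, driven through test/nvmf/target/multitarget_rpc.py rather than plain rpc.py. The sequence traced above, condensed:

    RPC=.../spdk/test/nvmf/target/multitarget_rpc.py

    $RPC nvmf_get_targets | jq length           # 1: only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length           # 3: default plus the two new targets
    $RPC nvmf_delete_target -n nvmf_tgt_1       # each delete returns "true" above
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length           # back to 1 before nvmftestfini runs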
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:10.163 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:10.163 02:23:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.163 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.163 02:23:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.070 02:23:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:12.070 00:07:12.070 real 0m6.935s 00:07:12.070 user 0m9.419s 00:07:12.070 sys 0m2.305s 00:07:12.070 02:23:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.070 02:23:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:12.070 ************************************ 00:07:12.070 END TEST nvmf_multitarget 00:07:12.070 ************************************ 00:07:12.070 02:23:59 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:12.070 02:23:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:12.070 02:23:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.070 02:23:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.328 ************************************ 00:07:12.328 START TEST nvmf_rpc 00:07:12.328 ************************************ 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:12.328 * Looking for test storage... 00:07:12.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.328 02:23:59 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.328 02:23:59 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.329 
02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:12.329 02:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:14.863 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:14.863 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:14.863 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.863 
02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:14.863 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:14.863 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:14.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:14.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:07:14.864 00:07:14.864 --- 10.0.0.2 ping statistics --- 00:07:14.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.864 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:14.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:07:14.864 00:07:14.864 --- 10.0.0.1 ping statistics --- 00:07:14.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.864 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2218139 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2218139 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 2218139 ']' 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.864 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.123 [2024-05-15 02:24:02.278202] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
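(A minimal sketch of the namespace wiring exercised above by nvmf_tcp_init in test/nvmf/common.sh, assuming this run's cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing shown in the trace: the target-side port is moved into a private network namespace, the initiator port stays on the host, and reachability is checked in both directions before the nvmf target starts. This only reconstructs commands already visible in the trace and is not the full helper.)

    # target-side port goes into its own namespace; initiator port stays on the host
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator gets 10.0.0.1, namespaced target gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open TCP/4420 toward the initiator interface and verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1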
00:07:15.123 [2024-05-15 02:24:02.278289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.123 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.123 [2024-05-15 02:24:02.368889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.123 [2024-05-15 02:24:02.493367] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.123 [2024-05-15 02:24:02.493425] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.123 [2024-05-15 02:24:02.493441] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.123 [2024-05-15 02:24:02.493455] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.123 [2024-05-15 02:24:02.493466] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.123 [2024-05-15 02:24:02.493523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.123 [2024-05-15 02:24:02.493578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.123 [2024-05-15 02:24:02.493603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.123 [2024-05-15 02:24:02.493607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.381 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:15.381 "tick_rate": 2700000000, 00:07:15.381 "poll_groups": [ 00:07:15.381 { 00:07:15.381 "name": "nvmf_tgt_poll_group_000", 00:07:15.381 "admin_qpairs": 0, 00:07:15.381 "io_qpairs": 0, 00:07:15.381 "current_admin_qpairs": 0, 00:07:15.381 "current_io_qpairs": 0, 00:07:15.381 "pending_bdev_io": 0, 00:07:15.382 "completed_nvme_io": 0, 00:07:15.382 "transports": [] 00:07:15.382 }, 00:07:15.382 { 00:07:15.382 "name": "nvmf_tgt_poll_group_001", 00:07:15.382 "admin_qpairs": 0, 00:07:15.382 "io_qpairs": 0, 00:07:15.382 "current_admin_qpairs": 0, 00:07:15.382 "current_io_qpairs": 0, 00:07:15.382 "pending_bdev_io": 0, 00:07:15.382 "completed_nvme_io": 0, 00:07:15.382 "transports": [] 00:07:15.382 }, 00:07:15.382 { 00:07:15.382 "name": "nvmf_tgt_poll_group_002", 00:07:15.382 "admin_qpairs": 0, 00:07:15.382 "io_qpairs": 0, 00:07:15.382 "current_admin_qpairs": 0, 00:07:15.382 "current_io_qpairs": 0, 00:07:15.382 "pending_bdev_io": 0, 00:07:15.382 "completed_nvme_io": 0, 00:07:15.382 "transports": [] 
00:07:15.382 }, 00:07:15.382 { 00:07:15.382 "name": "nvmf_tgt_poll_group_003", 00:07:15.382 "admin_qpairs": 0, 00:07:15.382 "io_qpairs": 0, 00:07:15.382 "current_admin_qpairs": 0, 00:07:15.382 "current_io_qpairs": 0, 00:07:15.382 "pending_bdev_io": 0, 00:07:15.382 "completed_nvme_io": 0, 00:07:15.382 "transports": [] 00:07:15.382 } 00:07:15.382 ] 00:07:15.382 }' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.382 [2024-05-15 02:24:02.719788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:15.382 "tick_rate": 2700000000, 00:07:15.382 "poll_groups": [ 00:07:15.382 { 00:07:15.382 "name": "nvmf_tgt_poll_group_000", 00:07:15.382 "admin_qpairs": 0, 00:07:15.382 "io_qpairs": 0, 00:07:15.382 "current_admin_qpairs": 0, 00:07:15.382 "current_io_qpairs": 0, 00:07:15.382 "pending_bdev_io": 0, 00:07:15.382 "completed_nvme_io": 0, 00:07:15.382 "transports": [ 00:07:15.382 { 00:07:15.382 "trtype": "TCP" 00:07:15.382 } 00:07:15.382 ] 00:07:15.382 }, 00:07:15.382 { 00:07:15.382 "name": "nvmf_tgt_poll_group_001", 00:07:15.382 "admin_qpairs": 0, 00:07:15.382 "io_qpairs": 0, 00:07:15.382 "current_admin_qpairs": 0, 00:07:15.382 "current_io_qpairs": 0, 00:07:15.382 "pending_bdev_io": 0, 00:07:15.382 "completed_nvme_io": 0, 00:07:15.382 "transports": [ 00:07:15.382 { 00:07:15.382 "trtype": "TCP" 00:07:15.382 } 00:07:15.382 ] 00:07:15.382 }, 00:07:15.382 { 00:07:15.382 "name": "nvmf_tgt_poll_group_002", 00:07:15.382 "admin_qpairs": 0, 00:07:15.382 "io_qpairs": 0, 00:07:15.382 "current_admin_qpairs": 0, 00:07:15.382 "current_io_qpairs": 0, 00:07:15.382 "pending_bdev_io": 0, 00:07:15.382 "completed_nvme_io": 0, 00:07:15.382 "transports": [ 00:07:15.382 { 00:07:15.382 "trtype": "TCP" 00:07:15.382 } 00:07:15.382 ] 00:07:15.382 }, 00:07:15.382 { 00:07:15.382 "name": "nvmf_tgt_poll_group_003", 00:07:15.382 "admin_qpairs": 0, 00:07:15.382 "io_qpairs": 0, 00:07:15.382 "current_admin_qpairs": 0, 00:07:15.382 "current_io_qpairs": 0, 00:07:15.382 "pending_bdev_io": 0, 00:07:15.382 "completed_nvme_io": 0, 00:07:15.382 "transports": [ 00:07:15.382 { 00:07:15.382 "trtype": "TCP" 00:07:15.382 } 00:07:15.382 ] 00:07:15.382 } 00:07:15.382 ] 
00:07:15.382 }' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:15.382 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.641 Malloc1 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.641 [2024-05-15 02:24:02.877398] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:15.641 [2024-05-15 02:24:02.877696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.641 02:24:02 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:15.641 [2024-05-15 02:24:02.900226] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:15.641 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:15.641 could not add new controller: failed to write to nvme-fabrics device 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.641 02:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:16.219 02:24:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
00:07:16.219 02:24:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:16.219 02:24:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:16.219 02:24:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:16.219 02:24:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:18.121 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:18.121 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:18.121 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.121 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:18.121 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.121 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:18.121 02:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:18.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.379 02:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.380 [2024-05-15 02:24:05.649512] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:18.380 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:18.380 could not add new controller: failed to write to nvme-fabrics device 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.380 02:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.946 02:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.946 02:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:18.946 02:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.946 02:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:18.946 02:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:20.876 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:20.876 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:20.876 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.876 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:20.876 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.876 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:20.876 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.135 [2024-05-15 02:24:08.415410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.135 02:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.701 02:24:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.701 02:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:21.701 02:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.701 02:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:21.701 02:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:24.230 
02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.230 [2024-05-15 02:24:11.177124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.230 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.488 02:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.488 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:24.488 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.488 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:24.488 02:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:26.387 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:26.387 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:26.387 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.387 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:26.387 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.387 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:26.387 02:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:26.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.644 02:24:13 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.644 [2024-05-15 02:24:13.853806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.644 02:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.645 02:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:27.220 02:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:27.220 02:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:27.220 02:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:27.220 02:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:27.220 02:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:29.121 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:29.121 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:29.121 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.121 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:29.121 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.121 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:29.121 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.121 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.121 02:24:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.379 [2024-05-15 02:24:16.586102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.379 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.380 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.380 02:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.380 02:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.945 02:24:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:07:29.945 02:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:29.945 02:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.945 02:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:29.945 02:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:31.845 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:31.845 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:31.845 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.845 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:31.845 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.845 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:31.845 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.104 
[2024-05-15 02:24:19.349560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.104 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.670 02:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.670 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:32.670 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.670 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:32.670 02:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:34.569 02:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:34.569 02:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:34.569 02:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.569 02:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:34.569 02:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.569 02:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:34.569 02:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.826 02:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.826 02:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:34.826 02:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:34.826 02:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.826 [2024-05-15 02:24:22.042505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.826 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 [2024-05-15 02:24:22.090549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 [2024-05-15 02:24:22.138709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.827 
02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 [2024-05-15 02:24:22.186884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.827 [2024-05-15 02:24:22.235072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.827 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
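The loop captured above (target/rpc.sh@81-94) exercises the full subsystem lifecycle five times against the target listening on 10.0.0.2:4420: create nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, add a TCP listener, attach bdev Malloc1 as namespace 5, allow any host, connect from the initiator with nvme-cli, poll lsblk until the serial appears, disconnect, then remove the namespace and delete the subsystem. The second loop (rpc.sh@99-107) repeats create/remove without a host connection. A minimal stand-alone sketch of one iteration, assuming a running nvmf_tgt with a Malloc1 bdev and the SPDK scripts/rpc.py client (paths and addresses here are illustrative, not taken from this log):

    #!/usr/bin/env bash
    # One create/connect/verify/teardown pass, mirroring target/rpc.sh
    set -e
    RPC="scripts/rpc.py"                 # SPDK JSON-RPC client (path assumed)
    NQN="nqn.2016-06.io.spdk:cnode1"
    SERIAL="SPDKISFASTANDAWESOME"

    "$RPC" nvmf_create_subsystem "$NQN" -s "$SERIAL"
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    "$RPC" nvmf_subsystem_allow_any_host "$NQN"

    # the CI run also passes --hostnqn/--hostid to nvme connect; omitted here
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420

    # waitforserial: poll until a block device with the expected serial shows up
    for i in $(seq 1 15); do
        if lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; then
            break
        fi
        sleep 2
    done

    nvme disconnect -n "$NQN"
    "$RPC" nvmf_subsystem_remove_ns "$NQN" 5
    "$RPC" nvmf_delete_subsystem "$NQN"

The nvmf_get_stats dump that follows is reduced by the script's jsum helper, e.g. jq '.poll_groups[].admin_qpairs' piped through awk '{s+=$1}END{print s}', and the test only asserts that the totals are non-zero (7 admin and 336 I/O qpairs in this run).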
00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:35.084 "tick_rate": 2700000000, 00:07:35.084 "poll_groups": [ 00:07:35.084 { 00:07:35.084 "name": "nvmf_tgt_poll_group_000", 00:07:35.084 "admin_qpairs": 2, 00:07:35.084 "io_qpairs": 84, 00:07:35.084 "current_admin_qpairs": 0, 00:07:35.084 "current_io_qpairs": 0, 00:07:35.084 "pending_bdev_io": 0, 00:07:35.084 "completed_nvme_io": 159, 00:07:35.084 "transports": [ 00:07:35.084 { 00:07:35.084 "trtype": "TCP" 00:07:35.084 } 00:07:35.084 ] 00:07:35.084 }, 00:07:35.084 { 00:07:35.084 "name": "nvmf_tgt_poll_group_001", 00:07:35.084 "admin_qpairs": 2, 00:07:35.084 "io_qpairs": 84, 00:07:35.084 "current_admin_qpairs": 0, 00:07:35.084 "current_io_qpairs": 0, 00:07:35.084 "pending_bdev_io": 0, 00:07:35.084 "completed_nvme_io": 88, 00:07:35.084 "transports": [ 00:07:35.084 { 00:07:35.084 "trtype": "TCP" 00:07:35.084 } 00:07:35.084 ] 00:07:35.084 }, 00:07:35.084 { 00:07:35.084 "name": "nvmf_tgt_poll_group_002", 00:07:35.084 "admin_qpairs": 1, 00:07:35.084 "io_qpairs": 84, 00:07:35.084 "current_admin_qpairs": 0, 00:07:35.084 "current_io_qpairs": 0, 00:07:35.084 "pending_bdev_io": 0, 00:07:35.084 "completed_nvme_io": 280, 00:07:35.084 "transports": [ 00:07:35.084 { 00:07:35.084 "trtype": "TCP" 00:07:35.084 } 00:07:35.084 ] 00:07:35.084 }, 00:07:35.084 { 00:07:35.084 "name": "nvmf_tgt_poll_group_003", 00:07:35.084 "admin_qpairs": 2, 00:07:35.084 "io_qpairs": 84, 00:07:35.084 "current_admin_qpairs": 0, 00:07:35.084 "current_io_qpairs": 0, 00:07:35.084 "pending_bdev_io": 0, 00:07:35.084 "completed_nvme_io": 159, 00:07:35.084 "transports": [ 00:07:35.084 { 00:07:35.084 "trtype": "TCP" 00:07:35.084 } 00:07:35.084 ] 00:07:35.084 } 00:07:35.084 ] 00:07:35.084 }' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:35.084 rmmod nvme_tcp 00:07:35.084 rmmod nvme_fabrics 00:07:35.084 rmmod nvme_keyring 00:07:35.084 02:24:22 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2218139 ']' 00:07:35.084 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2218139 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 2218139 ']' 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 2218139 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2218139 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2218139' 00:07:35.085 killing process with pid 2218139 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 2218139 00:07:35.085 [2024-05-15 02:24:22.430052] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:35.085 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 2218139 00:07:35.343 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:35.343 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:35.343 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:35.343 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:35.343 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:35.343 02:24:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.343 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.343 02:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.909 02:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.909 00:07:37.909 real 0m25.279s 00:07:37.909 user 1m20.074s 00:07:37.909 sys 0m4.294s 00:07:37.909 02:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.909 02:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.909 ************************************ 00:07:37.909 END TEST nvmf_rpc 00:07:37.909 ************************************ 00:07:37.909 02:24:24 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:37.909 02:24:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:37.909 02:24:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.909 02:24:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.909 ************************************ 00:07:37.909 START TEST nvmf_invalid 00:07:37.909 ************************************ 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:37.909 * Looking for test storage... 00:07:37.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.909 02:24:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.910 02:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:40.439 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:40.439 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:40.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:40.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.439 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:40.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:40.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:07:40.440 00:07:40.440 --- 10.0.0.2 ping statistics --- 00:07:40.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.440 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:07:40.440 00:07:40.440 --- 10.0.0.1 ping statistics --- 00:07:40.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.440 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2223434 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2223434 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 2223434 ']' 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:40.440 [2024-05-15 02:24:27.521256] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
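At this point invalid.sh has confirmed connectivity between the cvl_0_0/cvl_0_1 test interfaces, loaded nvme-tcp, and launched a fresh nvmf_tgt (pid 2223434) inside the cvl_0_0_ns_spdk network namespace. Once the target is listening on /var/tmp/spdk.sock, the script issues deliberately malformed nvmf_create_subsystem calls and requires each one to be rejected with a specific JSON-RPC error, as the output below shows. A minimal sketch of that negative-path pattern, assuming a running target and the SPDK scripts/rpc.py client (the cnode numbers are simply the random ones this run happened to pick):

    rpc="scripts/rpc.py"    # path assumed

    # Unknown target name: must be rejected with "Unable to find target foobar"
    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8119 2>&1) || true
    [[ $out == *"Unable to find target"* ]] || exit 1

    # Serial number containing a control character (0x1f): must be rejected as an invalid SN
    out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28262 2>&1) || true
    [[ $out == *"Invalid SN"* ]] || exit 1

    # Model number containing a control character: must be rejected as an invalid MN
    out=$("$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3410 2>&1) || true
    [[ $out == *"Invalid MN"* ]] || exit 1

invalid.sh then builds a 21-character random serial number (the gen_random_s character-by-character output visible below) and verifies that it, too, is rejected as an invalid SN.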
00:07:40.440 [2024-05-15 02:24:27.521327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.440 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.440 [2024-05-15 02:24:27.602863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.440 [2024-05-15 02:24:27.713666] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.440 [2024-05-15 02:24:27.713722] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.440 [2024-05-15 02:24:27.713750] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.440 [2024-05-15 02:24:27.713762] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.440 [2024-05-15 02:24:27.713772] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.440 [2024-05-15 02:24:27.713829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.440 [2024-05-15 02:24:27.713853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.440 [2024-05-15 02:24:27.714155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.440 [2024-05-15 02:24:27.714160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.440 02:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:40.698 02:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.698 02:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:40.698 02:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8119 00:07:40.956 [2024-05-15 02:24:28.144689] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:40.956 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:40.956 { 00:07:40.956 "nqn": "nqn.2016-06.io.spdk:cnode8119", 00:07:40.956 "tgt_name": "foobar", 00:07:40.956 "method": "nvmf_create_subsystem", 00:07:40.956 "req_id": 1 00:07:40.956 } 00:07:40.956 Got JSON-RPC error response 00:07:40.956 response: 00:07:40.956 { 00:07:40.956 "code": -32603, 00:07:40.956 "message": "Unable to find target foobar" 00:07:40.956 }' 00:07:40.956 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:40.956 { 00:07:40.956 "nqn": "nqn.2016-06.io.spdk:cnode8119", 00:07:40.956 "tgt_name": "foobar", 00:07:40.956 "method": "nvmf_create_subsystem", 00:07:40.956 "req_id": 1 00:07:40.956 } 00:07:40.956 Got JSON-RPC error response 00:07:40.956 response: 00:07:40.956 { 00:07:40.956 "code": -32603, 00:07:40.956 "message": "Unable to find target foobar" 00:07:40.956 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:40.956 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:40.956 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28262 00:07:41.214 [2024-05-15 02:24:28.437667] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28262: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:41.214 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:41.214 { 00:07:41.214 "nqn": "nqn.2016-06.io.spdk:cnode28262", 00:07:41.214 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:41.214 "method": "nvmf_create_subsystem", 00:07:41.214 "req_id": 1 00:07:41.214 } 00:07:41.214 Got JSON-RPC error response 00:07:41.215 response: 00:07:41.215 { 00:07:41.215 "code": -32602, 00:07:41.215 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:41.215 }' 00:07:41.215 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:41.215 { 00:07:41.215 "nqn": "nqn.2016-06.io.spdk:cnode28262", 00:07:41.215 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:41.215 "method": "nvmf_create_subsystem", 00:07:41.215 "req_id": 1 00:07:41.215 } 00:07:41.215 Got JSON-RPC error response 00:07:41.215 response: 00:07:41.215 { 00:07:41.215 "code": -32602, 00:07:41.215 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:41.215 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:41.215 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:41.215 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3410 00:07:41.473 [2024-05-15 02:24:28.722671] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3410: invalid model number 'SPDK_Controller' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:41.473 { 00:07:41.473 "nqn": "nqn.2016-06.io.spdk:cnode3410", 00:07:41.473 "model_number": "SPDK_Controller\u001f", 00:07:41.473 "method": "nvmf_create_subsystem", 00:07:41.473 "req_id": 1 00:07:41.473 } 00:07:41.473 Got JSON-RPC error response 00:07:41.473 response: 00:07:41.473 { 00:07:41.473 "code": -32602, 00:07:41.473 "message": "Invalid MN SPDK_Controller\u001f" 00:07:41.473 }' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:41.473 { 00:07:41.473 "nqn": "nqn.2016-06.io.spdk:cnode3410", 00:07:41.473 "model_number": "SPDK_Controller\u001f", 00:07:41.473 "method": "nvmf_create_subsystem", 00:07:41.473 "req_id": 1 00:07:41.473 } 00:07:41.473 Got JSON-RPC error response 00:07:41.473 response: 00:07:41.473 { 00:07:41.473 "code": -32602, 00:07:41.473 "message": "Invalid MN SPDK_Controller\u001f" 00:07:41.473 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:41.473 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 68 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '2>.LlQUi;jrMs0ND>q1=G' 00:07:41.474 02:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '2>.LlQUi;jrMs0ND>q1=G' nqn.2016-06.io.spdk:cnode23689 00:07:41.732 [2024-05-15 02:24:29.023605] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23689: invalid serial number '2>.LlQUi;jrMs0ND>q1=G' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:41.732 { 00:07:41.732 "nqn": "nqn.2016-06.io.spdk:cnode23689", 00:07:41.732 "serial_number": "2>.LlQUi;jrMs0ND>q1=G", 00:07:41.732 "method": "nvmf_create_subsystem", 00:07:41.732 "req_id": 1 00:07:41.732 } 00:07:41.732 Got JSON-RPC error response 00:07:41.732 response: 00:07:41.732 { 00:07:41.732 "code": -32602, 
00:07:41.732 "message": "Invalid SN 2>.LlQUi;jrMs0ND>q1=G" 00:07:41.732 }' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:41.732 { 00:07:41.732 "nqn": "nqn.2016-06.io.spdk:cnode23689", 00:07:41.732 "serial_number": "2>.LlQUi;jrMs0ND>q1=G", 00:07:41.732 "method": "nvmf_create_subsystem", 00:07:41.732 "req_id": 1 00:07:41.732 } 00:07:41.732 Got JSON-RPC error response 00:07:41.732 response: 00:07:41.732 { 00:07:41.732 "code": -32602, 00:07:41.732 "message": "Invalid SN 2>.LlQUi;jrMs0ND>q1=G" 00:07:41.732 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.732 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.733 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 
00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 
00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ # == \- ]] 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '#yU?/]Ac^ wf$N9j@3z#?p*,hD=g=UH-;tUmXteRR' 00:07:41.994 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '#yU?/]Ac^ wf$N9j@3z#?p*,hD=g=UH-;tUmXteRR' nqn.2016-06.io.spdk:cnode13226 00:07:42.254 [2024-05-15 02:24:29.420943] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13226: invalid model number '#yU?/]Ac^ wf$N9j@3z#?p*,hD=g=UH-;tUmXteRR' 00:07:42.254 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:42.254 { 00:07:42.254 "nqn": "nqn.2016-06.io.spdk:cnode13226", 00:07:42.254 "model_number": "#yU?/]Ac^ wf$N9j@3z#?p*,hD=g=UH-;tUmXteRR", 00:07:42.254 "method": "nvmf_create_subsystem", 00:07:42.254 "req_id": 1 00:07:42.254 } 00:07:42.254 Got JSON-RPC error response 00:07:42.254 response: 00:07:42.254 { 00:07:42.254 "code": -32602, 00:07:42.254 "message": "Invalid MN #yU?/]Ac^ wf$N9j@3z#?p*,hD=g=UH-;tUmXteRR" 00:07:42.254 }' 00:07:42.254 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:42.254 { 00:07:42.254 "nqn": "nqn.2016-06.io.spdk:cnode13226", 00:07:42.254 "model_number": "#yU?/]Ac^ wf$N9j@3z#?p*,hD=g=UH-;tUmXteRR", 00:07:42.254 "method": "nvmf_create_subsystem", 00:07:42.254 "req_id": 1 00:07:42.254 } 00:07:42.254 Got JSON-RPC error response 00:07:42.254 response: 00:07:42.254 { 
00:07:42.254 "code": -32602, 00:07:42.254 "message": "Invalid MN #yU?/]Ac^ wf$N9j@3z#?p*,hD=g=UH-;tUmXteRR" 00:07:42.254 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:42.254 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:42.254 [2024-05-15 02:24:29.665834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.512 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:42.770 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:42.770 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:42.770 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:42.770 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:42.770 02:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:42.770 [2024-05-15 02:24:30.183558] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:42.770 [2024-05-15 02:24:30.183682] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:43.028 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:43.028 { 00:07:43.028 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:43.028 "listen_address": { 00:07:43.028 "trtype": "tcp", 00:07:43.028 "traddr": "", 00:07:43.028 "trsvcid": "4421" 00:07:43.028 }, 00:07:43.028 "method": "nvmf_subsystem_remove_listener", 00:07:43.028 "req_id": 1 00:07:43.028 } 00:07:43.028 Got JSON-RPC error response 00:07:43.028 response: 00:07:43.028 { 00:07:43.028 "code": -32602, 00:07:43.028 "message": "Invalid parameters" 00:07:43.028 }' 00:07:43.028 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:43.028 { 00:07:43.028 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:43.028 "listen_address": { 00:07:43.028 "trtype": "tcp", 00:07:43.028 "traddr": "", 00:07:43.028 "trsvcid": "4421" 00:07:43.028 }, 00:07:43.028 "method": "nvmf_subsystem_remove_listener", 00:07:43.028 "req_id": 1 00:07:43.028 } 00:07:43.028 Got JSON-RPC error response 00:07:43.028 response: 00:07:43.028 { 00:07:43.028 "code": -32602, 00:07:43.028 "message": "Invalid parameters" 00:07:43.028 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:43.028 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2829 -i 0 00:07:43.028 [2024-05-15 02:24:30.424320] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2829: invalid cntlid range [0-65519] 00:07:43.286 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:43.286 { 00:07:43.286 "nqn": "nqn.2016-06.io.spdk:cnode2829", 00:07:43.286 "min_cntlid": 0, 00:07:43.286 "method": "nvmf_create_subsystem", 00:07:43.286 "req_id": 1 00:07:43.286 } 00:07:43.286 Got JSON-RPC error response 00:07:43.286 response: 00:07:43.286 { 00:07:43.286 "code": -32602, 00:07:43.286 "message": "Invalid cntlid range [0-65519]" 00:07:43.286 }' 00:07:43.286 02:24:30 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@74 -- # [[ request: 00:07:43.286 { 00:07:43.286 "nqn": "nqn.2016-06.io.spdk:cnode2829", 00:07:43.286 "min_cntlid": 0, 00:07:43.286 "method": "nvmf_create_subsystem", 00:07:43.287 "req_id": 1 00:07:43.287 } 00:07:43.287 Got JSON-RPC error response 00:07:43.287 response: 00:07:43.287 { 00:07:43.287 "code": -32602, 00:07:43.287 "message": "Invalid cntlid range [0-65519]" 00:07:43.287 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:43.287 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12933 -i 65520 00:07:43.287 [2024-05-15 02:24:30.673145] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12933: invalid cntlid range [65520-65519] 00:07:43.287 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:43.287 { 00:07:43.287 "nqn": "nqn.2016-06.io.spdk:cnode12933", 00:07:43.287 "min_cntlid": 65520, 00:07:43.287 "method": "nvmf_create_subsystem", 00:07:43.287 "req_id": 1 00:07:43.287 } 00:07:43.287 Got JSON-RPC error response 00:07:43.287 response: 00:07:43.287 { 00:07:43.287 "code": -32602, 00:07:43.287 "message": "Invalid cntlid range [65520-65519]" 00:07:43.287 }' 00:07:43.287 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:43.287 { 00:07:43.287 "nqn": "nqn.2016-06.io.spdk:cnode12933", 00:07:43.287 "min_cntlid": 65520, 00:07:43.287 "method": "nvmf_create_subsystem", 00:07:43.287 "req_id": 1 00:07:43.287 } 00:07:43.287 Got JSON-RPC error response 00:07:43.287 response: 00:07:43.287 { 00:07:43.287 "code": -32602, 00:07:43.287 "message": "Invalid cntlid range [65520-65519]" 00:07:43.287 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:43.287 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3271 -I 0 00:07:43.545 [2024-05-15 02:24:30.926051] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3271: invalid cntlid range [1-0] 00:07:43.545 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:43.545 { 00:07:43.545 "nqn": "nqn.2016-06.io.spdk:cnode3271", 00:07:43.545 "max_cntlid": 0, 00:07:43.545 "method": "nvmf_create_subsystem", 00:07:43.545 "req_id": 1 00:07:43.545 } 00:07:43.545 Got JSON-RPC error response 00:07:43.545 response: 00:07:43.545 { 00:07:43.545 "code": -32602, 00:07:43.545 "message": "Invalid cntlid range [1-0]" 00:07:43.545 }' 00:07:43.545 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:43.545 { 00:07:43.545 "nqn": "nqn.2016-06.io.spdk:cnode3271", 00:07:43.545 "max_cntlid": 0, 00:07:43.545 "method": "nvmf_create_subsystem", 00:07:43.545 "req_id": 1 00:07:43.545 } 00:07:43.545 Got JSON-RPC error response 00:07:43.545 response: 00:07:43.545 { 00:07:43.545 "code": -32602, 00:07:43.545 "message": "Invalid cntlid range [1-0]" 00:07:43.545 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:43.545 02:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32261 -I 65520 00:07:43.803 [2024-05-15 02:24:31.174886] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32261: invalid cntlid range [1-65520] 00:07:43.803 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # 
out='request: 00:07:43.803 { 00:07:43.803 "nqn": "nqn.2016-06.io.spdk:cnode32261", 00:07:43.803 "max_cntlid": 65520, 00:07:43.803 "method": "nvmf_create_subsystem", 00:07:43.803 "req_id": 1 00:07:43.803 } 00:07:43.803 Got JSON-RPC error response 00:07:43.803 response: 00:07:43.803 { 00:07:43.804 "code": -32602, 00:07:43.804 "message": "Invalid cntlid range [1-65520]" 00:07:43.804 }' 00:07:43.804 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:43.804 { 00:07:43.804 "nqn": "nqn.2016-06.io.spdk:cnode32261", 00:07:43.804 "max_cntlid": 65520, 00:07:43.804 "method": "nvmf_create_subsystem", 00:07:43.804 "req_id": 1 00:07:43.804 } 00:07:43.804 Got JSON-RPC error response 00:07:43.804 response: 00:07:43.804 { 00:07:43.804 "code": -32602, 00:07:43.804 "message": "Invalid cntlid range [1-65520]" 00:07:43.804 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:43.804 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31412 -i 6 -I 5 00:07:44.062 [2024-05-15 02:24:31.415704] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31412: invalid cntlid range [6-5] 00:07:44.062 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:44.062 { 00:07:44.062 "nqn": "nqn.2016-06.io.spdk:cnode31412", 00:07:44.062 "min_cntlid": 6, 00:07:44.062 "max_cntlid": 5, 00:07:44.062 "method": "nvmf_create_subsystem", 00:07:44.062 "req_id": 1 00:07:44.062 } 00:07:44.062 Got JSON-RPC error response 00:07:44.062 response: 00:07:44.062 { 00:07:44.062 "code": -32602, 00:07:44.062 "message": "Invalid cntlid range [6-5]" 00:07:44.062 }' 00:07:44.062 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:44.062 { 00:07:44.062 "nqn": "nqn.2016-06.io.spdk:cnode31412", 00:07:44.062 "min_cntlid": 6, 00:07:44.062 "max_cntlid": 5, 00:07:44.062 "method": "nvmf_create_subsystem", 00:07:44.062 "req_id": 1 00:07:44.062 } 00:07:44.062 Got JSON-RPC error response 00:07:44.062 response: 00:07:44.062 { 00:07:44.062 "code": -32602, 00:07:44.062 "message": "Invalid cntlid range [6-5]" 00:07:44.062 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:44.062 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:44.320 { 00:07:44.320 "name": "foobar", 00:07:44.320 "method": "nvmf_delete_target", 00:07:44.320 "req_id": 1 00:07:44.320 } 00:07:44.320 Got JSON-RPC error response 00:07:44.320 response: 00:07:44.320 { 00:07:44.320 "code": -32602, 00:07:44.320 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:44.320 }' 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:44.320 { 00:07:44.320 "name": "foobar", 00:07:44.320 "method": "nvmf_delete_target", 00:07:44.320 "req_id": 1 00:07:44.320 } 00:07:44.320 Got JSON-RPC error response 00:07:44.320 response: 00:07:44.320 { 00:07:44.320 "code": -32602, 00:07:44.320 "message": "The specified target doesn't exist, cannot delete it." 
00:07:44.320 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.320 rmmod nvme_tcp 00:07:44.320 rmmod nvme_fabrics 00:07:44.320 rmmod nvme_keyring 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2223434 ']' 00:07:44.320 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2223434 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 2223434 ']' 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 2223434 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2223434 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2223434' 00:07:44.321 killing process with pid 2223434 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 2223434 00:07:44.321 [2024-05-15 02:24:31.641550] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:44.321 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 2223434 00:07:44.580 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.580 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.580 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.580 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.580 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.580 02:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.580 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.580 02:24:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.117 02:24:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
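The trace above is target/invalid.sh finishing its negative tests: each step feeds nvmf_create_subsystem a malformed serial number, model number, or cntlid range through rpc.py and glob-matches the JSON-RPC error text (the escaped patterns like *\I\n\v\a\l\i\d\ \S\N* are just bash's xtrace rendering of that match). A condensed sketch of the same checks is below; it is illustrative rather than the script itself, the rpc.py path is the one used in this workspace, and the NQNs are the ones that happened to appear in this run.

    #!/usr/bin/env bash
    # Sketch of the negative checks traced above (not target/invalid.sh verbatim).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as used in this run

    # Serial numbers must be printable ASCII: an embedded 0x1f control byte is rejected.
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28262 \
          -s $'SPDKISFASTANDAWESOME\037' 2>&1) || true
    [[ $out == *"Invalid SN"* ]] || echo "expected an Invalid SN error"

    # Model numbers (-d) are validated the same way.
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3410 \
          -d $'SPDK_Controller\037' 2>&1) || true
    [[ $out == *"Invalid MN"* ]] || echo "expected an Invalid MN error"

    # cntlid limits: -i (min) and -I (max) must stay inside 1..65519 with min <= max.
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2829 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] || echo "expected an Invalid cntlid range error"

The long character-by-character loops earlier in the trace are the script's gen_random_s helper building the random 21- and 41-character strings that were then rejected as serial and model numbers; the sketch simply reuses the literal strings from this run instead of regenerating them.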
00:07:47.117 00:07:47.117 real 0m9.133s 00:07:47.117 user 0m20.493s 00:07:47.117 sys 0m2.685s 00:07:47.117 02:24:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.117 02:24:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:47.117 ************************************ 00:07:47.117 END TEST nvmf_invalid 00:07:47.117 ************************************ 00:07:47.117 02:24:33 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:47.117 02:24:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:47.117 02:24:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.117 02:24:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.117 ************************************ 00:07:47.117 START TEST nvmf_abort 00:07:47.117 ************************************ 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:47.117 * Looking for test storage... 00:07:47.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
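The nvmftestinit call at the end of this prologue is what produces the NIC discovery and namespace plumbing traced next: one e810 port is moved into a network namespace to act as the target side while the other stays in the default namespace for the initiator. A condensed sketch of that bring-up follows; the interface names (cvl_0_0 / cvl_0_1), addresses, and port are the ones from this particular run, so treat them as machine-specific rather than fixed values.

    # Rough sketch of the TCP bring-up that nvmftestinit performs below.
    TGT_IF=cvl_0_0            # port handed to the target, moved into a netns
    INI_IF=cvl_0_1            # port left in the default namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

The two pings at the end are the same reachability checks whose output appears further down in the trace; only after both succeed does the script start nvmf_tgt inside the namespace.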
00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:47.117 02:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.650 02:24:36 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.650 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:49.651 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:49.651 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:49.651 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:49.651 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:49.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:49.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:07:49.651 00:07:49.651 --- 10.0.0.2 ping statistics --- 00:07:49.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.651 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:07:49.651 00:07:49.651 --- 10.0.0.1 ping statistics --- 00:07:49.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.651 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2226477 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2226477 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 2226477 ']' 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.651 02:24:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.651 [2024-05-15 02:24:36.779095] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
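The commands traced above are the whole of the TCP test-bed bring-up: one of the two E810 ports (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed, standalone sketch of that setup, using only the commands visible in this log (the variable names below are illustrative, not the ones nvmf/common.sh uses):

#!/usr/bin/env bash
# Sketch of the namespace-based NVMe/TCP test bed traced above (illustrative names).
set -e
TARGET_IF=cvl_0_0        # physical port that will host the NVMe-oF target
INITIATOR_IF=cvl_0_1     # physical port that stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) in on the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

With that in place, the target application is started as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE (visible in the trace that follows), so it listens on 10.0.0.2 while everything run from the root namespace acts as the remote host.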
00:07:49.651 [2024-05-15 02:24:36.779185] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.651 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.651 [2024-05-15 02:24:36.854635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.651 [2024-05-15 02:24:36.964499] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.651 [2024-05-15 02:24:36.964553] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.651 [2024-05-15 02:24:36.964582] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.651 [2024-05-15 02:24:36.964593] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.651 [2024-05-15 02:24:36.964602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.651 [2024-05-15 02:24:36.964686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.651 [2024-05-15 02:24:36.964749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.651 [2024-05-15 02:24:36.964752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.910 [2024-05-15 02:24:37.115036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.910 Malloc0 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.910 Delay0 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.910 02:24:37 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.910 [2024-05-15 02:24:37.180467] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:49.910 [2024-05-15 02:24:37.180782] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.910 02:24:37 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:49.910 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.910 [2024-05-15 02:24:37.246740] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:52.438 Initializing NVMe Controllers 00:07:52.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:52.438 controller IO queue size 128 less than required 00:07:52.438 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:52.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:52.438 Initialization complete. Launching workers. 
00:07:52.438 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32660 00:07:52.438 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32721, failed to submit 62 00:07:52.438 success 32664, unsuccess 57, failed 0 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:52.438 rmmod nvme_tcp 00:07:52.438 rmmod nvme_fabrics 00:07:52.438 rmmod nvme_keyring 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2226477 ']' 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2226477 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 2226477 ']' 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 2226477 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2226477 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2226477' 00:07:52.438 killing process with pid 2226477 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 2226477 00:07:52.438 [2024-05-15 02:24:39.505724] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 2226477 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:52.438 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.439 
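Putting the shell trace above back together, the abort test amounts to the following RPC sequence against the target it just started, followed by the abort example binary acting as the initiator. This is a condensed replay for readability, not the script itself; in the log each call goes through rpc_cmd (rpc.py against /var/tmp/spdk.sock) and the paths below are shortened:

rpc="scripts/rpc.py"   # shortened; the log uses the full workspace path

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0                  # 64 MB bdev, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000            # 1,000,000 us read/write delays
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: queue depth 128 for 1 second against the delayed namespace;
# its output is the "abort submitted ... success ..." summary shown above.
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0       # teardown before nvmftestfini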
02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:52.439 02:24:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.439 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.439 02:24:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.004 02:24:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:55.004 00:07:55.004 real 0m7.836s 00:07:55.004 user 0m10.801s 00:07:55.004 sys 0m2.955s 00:07:55.004 02:24:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.004 02:24:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:55.004 ************************************ 00:07:55.004 END TEST nvmf_abort 00:07:55.004 ************************************ 00:07:55.004 02:24:41 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:55.004 02:24:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:55.004 02:24:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.004 02:24:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.004 ************************************ 00:07:55.004 START TEST nvmf_ns_hotplug_stress 00:07:55.004 ************************************ 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:55.004 * Looking for test storage... 00:07:55.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.004 
02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.004 
02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:55.004 02:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:55.004 02:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.004 02:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.538 02:24:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.538 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:57.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:57.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.539 
02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:57.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:57.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.539 
02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:07:57.539 00:07:57.539 --- 10.0.0.2 ping statistics --- 00:07:57.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.539 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:57.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:07:57.539 00:07:57.539 --- 10.0.0.1 ping statistics --- 00:07:57.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.539 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2229115 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2229115 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 2229115 ']' 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:57.539 02:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:57.539 [2024-05-15 02:24:44.740647] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:07:57.539 [2024-05-15 02:24:44.740734] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.539 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.539 [2024-05-15 02:24:44.824004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.539 [2024-05-15 02:24:44.944650] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:57.539 [2024-05-15 02:24:44.944733] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.539 [2024-05-15 02:24:44.944750] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.539 [2024-05-15 02:24:44.944763] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.540 [2024-05-15 02:24:44.944775] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.540 [2024-05-15 02:24:44.944859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.540 [2024-05-15 02:24:44.944942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.540 [2024-05-15 02:24:44.944939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.474 02:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:58.474 02:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:07:58.474 02:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.474 02:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.474 02:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:58.474 02:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.474 02:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:58.474 02:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:58.732 [2024-05-15 02:24:46.016977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.732 02:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:58.990 02:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.248 [2024-05-15 02:24:46.519507] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:59.248 [2024-05-15 02:24:46.519768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.248 02:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.506 02:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:59.764 Malloc0 00:07:59.764 02:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.021 Delay0 00:08:00.022 02:24:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.279 02:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:00.537 NULL1 00:08:00.537 02:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:00.795 02:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2229542 00:08:00.795 02:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:00.795 02:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:00.795 02:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.795 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.167 Read completed with error (sct=0, sc=11) 00:08:02.167 02:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.425 02:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:02.425 02:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:02.425 true 00:08:02.683 02:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:02.683 02:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.249 02:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.506 02:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:03.506 02:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:03.764 true 00:08:03.764 02:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:03.764 02:24:51 
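From this point on the trace repeats the same cycle while the background reader runs: hot-add the Delay0 namespace, grow the NULL1 bdev by one unit, check that the perf process is still alive, and hot-remove a namespace again. A condensed sketch of that cycle, reconstructed from the trace (the individual commands are taken verbatim from the log; the while kill -0 loop structure, the ordering inside it, and the namespace-ID bookkeeping are assumptions about the script rather than something the log shows directly):

rpc="scripts/rpc.py"   # shortened; the log uses the full workspace path
null_size=1000

# Background I/O that keeps the connection busy while namespaces come and go;
# -Q 1000 appears to be what turns per-I/O read errors into the periodic
# "Message suppressed 999 times" lines seen in the log.
build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

while kill -0 "$PERF_PID" 2>/dev/null; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # hot-add a namespace
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"                         # resize NULL1 under load
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove namespace 1
done
wait "$PERF_PID"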
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.022 02:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.280 02:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:04.280 02:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:04.538 true 00:08:04.538 02:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:04.538 02:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.796 02:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.053 02:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:05.053 02:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:05.311 true 00:08:05.311 02:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:05.311 02:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.691 02:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.691 02:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:06.691 02:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:06.949 true 00:08:06.949 02:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:06.949 02:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.881 02:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.881 02:24:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:07.881 02:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:08.138 true 00:08:08.138 02:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:08.138 02:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.395 02:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.652 02:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:08.652 02:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:08.909 true 00:08:08.909 02:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:08.909 02:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.888 02:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.145 02:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:10.145 02:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:10.402 true 00:08:10.402 02:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:10.402 02:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.660 02:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.917 02:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:10.917 02:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:11.176 true 00:08:11.176 02:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:11.176 02:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.434 02:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.692 02:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1010 00:08:11.692 02:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:11.950 true 00:08:11.950 02:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:11.950 02:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.884 02:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.142 02:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:13.142 02:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:13.399 true 00:08:13.399 02:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:13.399 02:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.656 02:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.915 02:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:13.915 02:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:14.172 true 00:08:14.172 02:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:14.172 02:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.106 02:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.106 02:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:15.106 02:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:15.365 true 00:08:15.365 02:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:15.365 02:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.623 02:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.881 02:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:15.881 02:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:16.139 true 00:08:16.139 02:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:16.139 02:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.073 02:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.331 02:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:17.331 02:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:17.589 true 00:08:17.589 02:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:17.589 02:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.846 02:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.104 02:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:18.104 02:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:18.361 true 00:08:18.361 02:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:18.361 02:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.294 02:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.552 02:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:19.552 02:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:19.810 true 00:08:19.810 02:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:19.810 02:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.068 02:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.326 02:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:20.326 02:25:07 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:20.584 true 00:08:20.584 02:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:20.584 02:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.842 02:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.099 02:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:21.099 02:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:21.357 true 00:08:21.357 02:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:21.357 02:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.290 02:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.548 02:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:22.548 02:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:22.831 true 00:08:23.094 02:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:23.094 02:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.659 02:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.941 02:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:23.941 02:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:24.197 true 00:08:24.197 02:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2229542 00:08:24.197 02:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.453 02:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.710 02:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:24.710 02:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:24.967 true 00:08:24.967 02:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:24.967 02:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.899 02:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.161 02:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:26.161 02:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:26.419 true 00:08:26.419 02:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:26.419 02:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.675 02:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.933 02:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:26.933 02:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:27.191 true 00:08:27.191 02:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:27.191 02:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.450 02:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.708 02:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:27.708 02:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:27.708 true 00:08:27.708 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:27.708 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.076 02:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.076 02:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:29.076 02:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:29.334 true 00:08:29.334 02:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:29.334 02:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.591 02:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.848 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:29.848 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:30.104 true 00:08:30.105 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:30.105 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.036 Initializing NVMe Controllers 00:08:31.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:31.036 Controller IO queue size 128, less than required. 00:08:31.036 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.036 Controller IO queue size 128, less than required. 00:08:31.036 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:31.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:31.036 Initialization complete. Launching workers. 
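Up to this point the trace repeats the @44-@50 cycle of target/ns_hotplug_stress.sh: check with kill -0 that the I/O generator (PID 2229542 here) is still alive, hot-remove namespace 1, re-add it backed by the Delay0 bdev, bump null_size, and resize NULL1 while I/O is in flight. A minimal standalone sketch of that cycle follows; the rpc.py path, NQN, bdev names and RPC calls are taken from the trace, while PERF_PID and the starting size are illustrative placeholders, not values from the actual script.

# Sketch of the hotplug/resize cycle traced above. PERF_PID and the starting
# null_size are illustrative assumptions; the RPC calls, NQN and bdev names
# come from the trace entries.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000

while kill -0 "$PERF_PID" 2>/dev/null; do         # loop while the I/O generator is alive
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1      # hot-remove namespace 1 under I/O
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0    # hot-add it back on the Delay0 bdev
    null_size=$((null_size + 1))                  # bump the target size
    "$rpc" bdev_null_resize NULL1 "$null_size"    # resize NULL1 while I/O is running
done

The cycle ends once the perf process exits on its own, which is what the latency summary immediately below reports.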
00:08:31.036 ======================================================== 00:08:31.036 Latency(us) 00:08:31.037 Device Information : IOPS MiB/s Average min max 00:08:31.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1093.16 0.53 61767.35 2431.54 1023404.36 00:08:31.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11036.51 5.39 11597.83 2801.58 452418.53 00:08:31.037 ======================================================== 00:08:31.037 Total : 12129.67 5.92 16119.26 2431.54 1023404.36 00:08:31.037 00:08:31.037 02:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.602 02:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:31.602 02:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:31.602 true 00:08:31.602 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2229542 00:08:31.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2229542) - No such process 00:08:31.602 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2229542 00:08:31.602 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.860 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.118 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:32.118 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:32.118 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:32.118 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.118 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:32.375 null0 00:08:32.375 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.375 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.375 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:32.634 null1 00:08:32.634 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.634 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.634 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:32.892 null2 00:08:32.892 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.892 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:08:32.892 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:33.150 null3 00:08:33.150 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.150 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.150 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:33.408 null4 00:08:33.408 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.408 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.408 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:33.665 null5 00:08:33.665 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.665 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.665 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:33.921 null6 00:08:33.921 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.921 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.921 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:34.180 null7 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
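From target/ns_hotplug_stress.sh@58 onward the trace switches to the parallel phase: eight 100 MB null bdevs (null0 through null7, 4096-byte block size) are created, and for each one a background add_remove worker re-adds and removes its namespace ten times while the worker PIDs are collected for a final wait. The sketch below is a reconstruction of that phase from the traced commands only; it is not the script itself, and details such as error handling are omitted.

# Reconstruction of the parallel add/remove phase, based on the @14-@18 and
# @58-@66 trace entries. Only the RPC names, arguments and loop bounds are
# taken from the trace; everything else is an assumption.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # attach bdev as namespace $nsid
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"            # then detach it again
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    "$rpc" bdev_null_create "null$i" 100 4096    # 100 MB null bdev, 4096-byte blocks
    add_remove "$((i + 1))" "null$i" &           # nsid 1..8 paired with null0..null7
    pids+=($!)                                   # collect worker PIDs
done
wait "${pids[@]}"                                # cf. the "wait 2233597 2233598 ..." entry below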
00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.180 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2233597 2233598 2233600 2233602 2233604 2233606 2233608 2233610 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.181 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.440 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.440 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.440 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.440 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.440 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.440 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.440 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.440 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.698 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.698 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.698 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.698 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.699 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.957 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.957 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.957 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.957 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.957 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.957 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.957 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.957 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.216 02:25:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.216 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.475 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.475 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.475 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.475 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.734 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.734 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.734 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.734 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.734 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.734 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.734 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.734 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.734 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.734 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.008 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.299 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.557 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.557 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.557 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.557 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.557 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.557 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.557 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.557 
02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.557 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.557 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.815 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.815 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.815 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.815 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.815 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.815 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.073 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.074 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.074 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.074 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.074 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.074 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.074 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.074 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.332 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.332 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.332 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.332 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.332 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.332 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.332 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.332 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.591 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.850 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.850 
02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.850 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.850 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.850 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.850 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.850 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.850 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.108 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.367 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.367 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.367 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.367 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.367 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.367 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.367 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.367 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.626 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.885 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.885 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.885 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.885 
02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.885 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.885 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.885 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.885 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.143 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.144 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.144 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.402 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.402 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.402 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.402 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.402 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.402 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.402 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.402 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:39.660 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:39.660 rmmod nvme_tcp 00:08:39.660 rmmod nvme_fabrics 00:08:39.660 rmmod nvme_keyring 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2229115 ']' 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2229115 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 2229115 ']' 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 2229115 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:39.660 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2229115 00:08:39.917 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:39.917 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:39.917 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2229115' 00:08:39.917 killing process with pid 2229115 00:08:39.917 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 2229115 00:08:39.917 [2024-05-15 02:25:27.088776] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:39.917 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 2229115 00:08:40.176 02:25:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.176 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.176 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.176 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.176 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.176 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.176 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.176 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.076 02:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.076 00:08:42.076 real 0m47.507s 00:08:42.076 user 3m32.904s 00:08:42.076 sys 0m16.434s 00:08:42.076 02:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.076 02:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.076 ************************************ 00:08:42.076 END TEST nvmf_ns_hotplug_stress 00:08:42.076 ************************************ 00:08:42.076 02:25:29 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:42.076 02:25:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:42.076 02:25:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.076 02:25:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.076 ************************************ 00:08:42.076 START TEST nvmf_connect_stress 00:08:42.076 ************************************ 00:08:42.076 02:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:42.335 * Looking for test storage... 
00:08:42.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.335 02:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.335 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:42.335 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.335 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.335 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.335 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.335 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.335 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.335 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.336 02:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:44.864 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:44.865 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:44.865 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:44.865 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.865 02:25:31 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:44.865 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:44.865 02:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:44.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:08:44.865 00:08:44.865 --- 10.0.0.2 ping statistics --- 00:08:44.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.865 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:08:44.865 00:08:44.865 --- 10.0.0.1 ping statistics --- 00:08:44.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.865 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2236658 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2236658 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 2236658 ']' 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:44.865 02:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.865 [2024-05-15 02:25:32.111608] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:08:44.865 [2024-05-15 02:25:32.111683] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.865 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.865 [2024-05-15 02:25:32.186692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.123 [2024-05-15 02:25:32.298746] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.124 [2024-05-15 02:25:32.298796] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.124 [2024-05-15 02:25:32.298809] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.124 [2024-05-15 02:25:32.298821] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.124 [2024-05-15 02:25:32.298830] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.124 [2024-05-15 02:25:32.298915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.124 [2024-05-15 02:25:32.298980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.124 [2024-05-15 02:25:32.298985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.689 [2024-05-15 02:25:33.081039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.689 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.689 [2024-05-15 02:25:33.097950] nvmf_rpc.c: 610:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:45.947 [2024-05-15 02:25:33.110048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.947 NULL1 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2236812 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 
02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.947 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.205 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.205 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:46.205 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.205 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.205 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.463 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.463 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:46.463 02:25:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.463 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.463 02:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.720 02:25:34 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.720 02:25:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:46.720 02:25:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.720 02:25:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.720 02:25:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.286 02:25:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.286 02:25:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:47.286 02:25:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.286 02:25:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.286 02:25:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.544 02:25:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.544 02:25:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:47.544 02:25:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.544 02:25:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.544 02:25:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.802 02:25:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.802 02:25:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:47.802 02:25:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.802 02:25:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.802 02:25:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 02:25:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.060 02:25:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:48.060 02:25:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.060 02:25:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.060 02:25:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.624 02:25:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.624 02:25:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:48.624 02:25:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.624 02:25:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.624 02:25:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.882 02:25:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.882 02:25:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:48.882 02:25:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.882 02:25:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.882 02:25:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.139 02:25:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:49.139 02:25:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:49.139 02:25:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.139 02:25:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.139 02:25:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.397 02:25:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.397 02:25:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:49.397 02:25:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.397 02:25:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.397 02:25:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.660 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.660 02:25:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:49.660 02:25:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.660 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.660 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.961 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.961 02:25:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:49.961 02:25:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.961 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.961 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.527 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.527 02:25:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:50.527 02:25:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.527 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.527 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.785 02:25:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:50.785 02:25:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.785 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.785 02:25:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.043 02:25:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.043 02:25:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:51.043 02:25:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.043 02:25:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.043 02:25:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.301 02:25:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.301 02:25:38 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:51.301 02:25:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.301 02:25:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.301 02:25:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.558 02:25:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.558 02:25:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:51.558 02:25:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.558 02:25:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.558 02:25:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.137 02:25:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.137 02:25:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:52.137 02:25:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.137 02:25:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.137 02:25:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.395 02:25:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.395 02:25:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:52.395 02:25:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.395 02:25:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.395 02:25:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.652 02:25:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.652 02:25:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:52.652 02:25:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.652 02:25:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.652 02:25:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.910 02:25:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.910 02:25:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:52.910 02:25:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.910 02:25:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.910 02:25:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.169 02:25:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.169 02:25:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:53.169 02:25:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.169 02:25:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.169 02:25:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.736 02:25:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.736 02:25:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 2236812 00:08:53.736 02:25:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.736 02:25:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.736 02:25:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.994 02:25:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.994 02:25:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:53.994 02:25:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.994 02:25:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.994 02:25:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.252 02:25:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.252 02:25:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:54.252 02:25:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.252 02:25:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.252 02:25:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.510 02:25:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.510 02:25:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:54.510 02:25:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.510 02:25:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.510 02:25:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.767 02:25:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.767 02:25:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:54.767 02:25:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.767 02:25:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.767 02:25:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.333 02:25:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.333 02:25:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:55.333 02:25:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.333 02:25:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.333 02:25:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.591 02:25:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.591 02:25:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:55.591 02:25:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.591 02:25:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.591 02:25:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.849 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.849 02:25:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:55.849 02:25:43 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.849 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.849 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.107 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2236812 00:08:56.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2236812) - No such process 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2236812 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.107 rmmod nvme_tcp 00:08:56.107 rmmod nvme_fabrics 00:08:56.107 rmmod nvme_keyring 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2236658 ']' 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2236658 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 2236658 ']' 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 2236658 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:56.107 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2236658 00:08:56.366 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:56.366 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:56.366 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2236658' 00:08:56.366 killing process with pid 2236658 00:08:56.366 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 2236658 00:08:56.366 [2024-05-15 02:25:43.535090] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:08:56.366 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 2236658 00:08:56.625 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.625 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.625 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.625 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.625 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.625 02:25:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.625 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.625 02:25:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.533 02:25:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.533 00:08:58.533 real 0m16.380s 00:08:58.533 user 0m40.343s 00:08:58.533 sys 0m6.299s 00:08:58.533 02:25:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:58.533 02:25:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.533 ************************************ 00:08:58.533 END TEST nvmf_connect_stress 00:08:58.533 ************************************ 00:08:58.533 02:25:45 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:58.533 02:25:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:58.533 02:25:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:58.533 02:25:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.533 ************************************ 00:08:58.533 START TEST nvmf_fused_ordering 00:08:58.533 ************************************ 00:08:58.533 02:25:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:58.792 * Looking for test storage... 
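The nvmf_connect_stress loop traced above follows a simple pattern: connect_stress.sh@34 polls the background stress process (pid 2236812) with kill -0 while @35 keeps issuing RPCs to the target; once the poll reports "No such process" the script waits on the pid, removes its rpc.txt scratch file, and nvmftestfini unloads nvme-tcp/nvme-fabrics and kills the target app (pid 2236658). A minimal bash sketch of that poll-then-cleanup pattern — not the literal script body; stress_worker and the rpc.txt contents are placeholders, and rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py:

  stress_worker &                    # hypothetical stand-in for the background stress process
  PERF_PID=$!
  trap 'rm -f rpc.txt' SIGINT SIGTERM EXIT
  while kill -0 "$PERF_PID" 2>/dev/null; do   # loop for as long as the worker is alive
      rpc_cmd < rpc.txt                       # keep the target busy with RPCs meanwhile
  done
  wait "$PERF_PID"                   # reap the worker once kill -0 starts failing
  rm -f rpc.txt                      # drop the RPC scratch file, then nvmftestfini tears the target down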
00:08:58.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.792 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.793 02:25:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.325 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:01.326 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:01.326 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:01.326 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.326 02:25:48 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:01.326 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:09:01.326 00:09:01.326 --- 10.0.0.2 ping statistics --- 00:09:01.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.326 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:09:01.326 00:09:01.326 --- 10.0.0.1 ping statistics --- 00:09:01.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.326 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2240378 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2240378 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 2240378 ']' 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:01.326 02:25:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.326 [2024-05-15 02:25:48.662603] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
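Before the target app is started, nvmf_tcp_init wires the two ice/e810 ports into a point-to-point test network: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in iptables, and both directions are verified with a single ping. Condensed from the trace above (interface names as seen in this run):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator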
00:09:01.326 [2024-05-15 02:25:48.662688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.326 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.585 [2024-05-15 02:25:48.745034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.585 [2024-05-15 02:25:48.860043] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.585 [2024-05-15 02:25:48.860107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.585 [2024-05-15 02:25:48.860133] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.585 [2024-05-15 02:25:48.860147] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.585 [2024-05-15 02:25:48.860159] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.585 [2024-05-15 02:25:48.860189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.519 [2024-05-15 02:25:49.620048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.519 [2024-05-15 02:25:49.635989] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:02.519 [2024-05-15 02:25:49.636267] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.519 NULL1 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.519 02:25:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:02.519 [2024-05-15 02:25:49.682036] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
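With the network in place, fused_ordering.sh stands the target up inside the namespace and exposes a 1000 MiB, 512-byte-block null bdev through a single subsystem before pointing the fused_ordering initiator at it. The equivalent sequence, written against scripts/rpc.py (rpc_cmd in the trace is assumed to resolve to that script; paths shortened, transport options kept exactly as traced):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &        # target app on core 1
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, options as traced
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512                                  # 1000 MiB null bdev, 512 B blocks
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'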
00:09:02.519 [2024-05-15 02:25:49.682079] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240529 ] 00:09:02.519 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.086 Attached to nqn.2016-06.io.spdk:cnode1 00:09:03.086 Namespace ID: 1 size: 1GB 00:09:03.086 fused_ordering(0) 00:09:03.086 fused_ordering(1) 00:09:03.086 fused_ordering(2) 00:09:03.086 fused_ordering(3) 00:09:03.086 fused_ordering(4) 00:09:03.086 fused_ordering(5) 00:09:03.086 fused_ordering(6) 00:09:03.086 fused_ordering(7) 00:09:03.086 fused_ordering(8) 00:09:03.086 fused_ordering(9) 00:09:03.086 fused_ordering(10) 00:09:03.086 fused_ordering(11) 00:09:03.086 fused_ordering(12) 00:09:03.086 fused_ordering(13) 00:09:03.086 fused_ordering(14) 00:09:03.086 fused_ordering(15) 00:09:03.086 fused_ordering(16) 00:09:03.086 fused_ordering(17) 00:09:03.086 fused_ordering(18) 00:09:03.086 fused_ordering(19) 00:09:03.086 fused_ordering(20) 00:09:03.086 fused_ordering(21) 00:09:03.086 fused_ordering(22) 00:09:03.086 fused_ordering(23) 00:09:03.086 fused_ordering(24) 00:09:03.086 fused_ordering(25) 00:09:03.086 fused_ordering(26) 00:09:03.086 fused_ordering(27) 00:09:03.086 fused_ordering(28) 00:09:03.086 fused_ordering(29) 00:09:03.086 fused_ordering(30) 00:09:03.086 fused_ordering(31) 00:09:03.086 fused_ordering(32) 00:09:03.086 fused_ordering(33) 00:09:03.086 fused_ordering(34) 00:09:03.086 fused_ordering(35) 00:09:03.086 fused_ordering(36) 00:09:03.086 fused_ordering(37) 00:09:03.086 fused_ordering(38) 00:09:03.086 fused_ordering(39) 00:09:03.086 fused_ordering(40) 00:09:03.086 fused_ordering(41) 00:09:03.086 fused_ordering(42) 00:09:03.086 fused_ordering(43) 00:09:03.086 fused_ordering(44) 00:09:03.086 fused_ordering(45) 00:09:03.086 fused_ordering(46) 00:09:03.086 fused_ordering(47) 00:09:03.086 fused_ordering(48) 00:09:03.086 fused_ordering(49) 00:09:03.086 fused_ordering(50) 00:09:03.086 fused_ordering(51) 00:09:03.086 fused_ordering(52) 00:09:03.086 fused_ordering(53) 00:09:03.086 fused_ordering(54) 00:09:03.086 fused_ordering(55) 00:09:03.087 fused_ordering(56) 00:09:03.087 fused_ordering(57) 00:09:03.087 fused_ordering(58) 00:09:03.087 fused_ordering(59) 00:09:03.087 fused_ordering(60) 00:09:03.087 fused_ordering(61) 00:09:03.087 fused_ordering(62) 00:09:03.087 fused_ordering(63) 00:09:03.087 fused_ordering(64) 00:09:03.087 fused_ordering(65) 00:09:03.087 fused_ordering(66) 00:09:03.087 fused_ordering(67) 00:09:03.087 fused_ordering(68) 00:09:03.087 fused_ordering(69) 00:09:03.087 fused_ordering(70) 00:09:03.087 fused_ordering(71) 00:09:03.087 fused_ordering(72) 00:09:03.087 fused_ordering(73) 00:09:03.087 fused_ordering(74) 00:09:03.087 fused_ordering(75) 00:09:03.087 fused_ordering(76) 00:09:03.087 fused_ordering(77) 00:09:03.087 fused_ordering(78) 00:09:03.087 fused_ordering(79) 00:09:03.087 fused_ordering(80) 00:09:03.087 fused_ordering(81) 00:09:03.087 fused_ordering(82) 00:09:03.087 fused_ordering(83) 00:09:03.087 fused_ordering(84) 00:09:03.087 fused_ordering(85) 00:09:03.087 fused_ordering(86) 00:09:03.087 fused_ordering(87) 00:09:03.087 fused_ordering(88) 00:09:03.087 fused_ordering(89) 00:09:03.087 fused_ordering(90) 00:09:03.087 fused_ordering(91) 00:09:03.087 fused_ordering(92) 00:09:03.087 fused_ordering(93) 00:09:03.087 fused_ordering(94) 00:09:03.087 fused_ordering(95) 00:09:03.087 fused_ordering(96) 00:09:03.087 
fused_ordering(97) 00:09:03.087 fused_ordering(98) 00:09:03.087 fused_ordering(99) 00:09:03.087 fused_ordering(100) 00:09:03.087 fused_ordering(101) 00:09:03.087 fused_ordering(102) 00:09:03.087 fused_ordering(103) 00:09:03.087 fused_ordering(104) 00:09:03.087 fused_ordering(105) 00:09:03.087 fused_ordering(106) 00:09:03.087 fused_ordering(107) 00:09:03.087 fused_ordering(108) 00:09:03.087 fused_ordering(109) 00:09:03.087 fused_ordering(110) 00:09:03.087 fused_ordering(111) 00:09:03.087 fused_ordering(112) 00:09:03.087 fused_ordering(113) 00:09:03.087 fused_ordering(114) 00:09:03.087 fused_ordering(115) 00:09:03.087 fused_ordering(116) 00:09:03.087 fused_ordering(117) 00:09:03.087 fused_ordering(118) 00:09:03.087 fused_ordering(119) 00:09:03.087 fused_ordering(120) 00:09:03.087 fused_ordering(121) 00:09:03.087 fused_ordering(122) 00:09:03.087 fused_ordering(123) 00:09:03.087 fused_ordering(124) 00:09:03.087 fused_ordering(125) 00:09:03.087 fused_ordering(126) 00:09:03.087 fused_ordering(127) 00:09:03.087 fused_ordering(128) 00:09:03.087 fused_ordering(129) 00:09:03.087 fused_ordering(130) 00:09:03.087 fused_ordering(131) 00:09:03.087 fused_ordering(132) 00:09:03.087 fused_ordering(133) 00:09:03.087 fused_ordering(134) 00:09:03.087 fused_ordering(135) 00:09:03.087 fused_ordering(136) 00:09:03.087 fused_ordering(137) 00:09:03.087 fused_ordering(138) 00:09:03.087 fused_ordering(139) 00:09:03.087 fused_ordering(140) 00:09:03.087 fused_ordering(141) 00:09:03.087 fused_ordering(142) 00:09:03.087 fused_ordering(143) 00:09:03.087 fused_ordering(144) 00:09:03.087 fused_ordering(145) 00:09:03.087 fused_ordering(146) 00:09:03.087 fused_ordering(147) 00:09:03.087 fused_ordering(148) 00:09:03.087 fused_ordering(149) 00:09:03.087 fused_ordering(150) 00:09:03.087 fused_ordering(151) 00:09:03.087 fused_ordering(152) 00:09:03.087 fused_ordering(153) 00:09:03.087 fused_ordering(154) 00:09:03.087 fused_ordering(155) 00:09:03.087 fused_ordering(156) 00:09:03.087 fused_ordering(157) 00:09:03.087 fused_ordering(158) 00:09:03.087 fused_ordering(159) 00:09:03.087 fused_ordering(160) 00:09:03.087 fused_ordering(161) 00:09:03.087 fused_ordering(162) 00:09:03.087 fused_ordering(163) 00:09:03.087 fused_ordering(164) 00:09:03.087 fused_ordering(165) 00:09:03.087 fused_ordering(166) 00:09:03.087 fused_ordering(167) 00:09:03.087 fused_ordering(168) 00:09:03.087 fused_ordering(169) 00:09:03.087 fused_ordering(170) 00:09:03.087 fused_ordering(171) 00:09:03.087 fused_ordering(172) 00:09:03.087 fused_ordering(173) 00:09:03.087 fused_ordering(174) 00:09:03.087 fused_ordering(175) 00:09:03.087 fused_ordering(176) 00:09:03.087 fused_ordering(177) 00:09:03.087 fused_ordering(178) 00:09:03.087 fused_ordering(179) 00:09:03.087 fused_ordering(180) 00:09:03.087 fused_ordering(181) 00:09:03.087 fused_ordering(182) 00:09:03.087 fused_ordering(183) 00:09:03.087 fused_ordering(184) 00:09:03.087 fused_ordering(185) 00:09:03.087 fused_ordering(186) 00:09:03.087 fused_ordering(187) 00:09:03.087 fused_ordering(188) 00:09:03.087 fused_ordering(189) 00:09:03.087 fused_ordering(190) 00:09:03.087 fused_ordering(191) 00:09:03.087 fused_ordering(192) 00:09:03.087 fused_ordering(193) 00:09:03.087 fused_ordering(194) 00:09:03.087 fused_ordering(195) 00:09:03.087 fused_ordering(196) 00:09:03.087 fused_ordering(197) 00:09:03.087 fused_ordering(198) 00:09:03.087 fused_ordering(199) 00:09:03.087 fused_ordering(200) 00:09:03.087 fused_ordering(201) 00:09:03.087 fused_ordering(202) 00:09:03.087 fused_ordering(203) 00:09:03.087 fused_ordering(204) 
00:09:03.087 fused_ordering(205) 00:09:04.022 fused_ordering(206) 00:09:04.022 fused_ordering(207) 00:09:04.022 fused_ordering(208) 00:09:04.022 fused_ordering(209) 00:09:04.022 fused_ordering(210) 00:09:04.022 fused_ordering(211) 00:09:04.022 fused_ordering(212) 00:09:04.022 fused_ordering(213) 00:09:04.022 fused_ordering(214) 00:09:04.022 fused_ordering(215) 00:09:04.022 fused_ordering(216) 00:09:04.022 fused_ordering(217) 00:09:04.022 fused_ordering(218) 00:09:04.022 fused_ordering(219) 00:09:04.022 fused_ordering(220) 00:09:04.022 fused_ordering(221) 00:09:04.022 fused_ordering(222) 00:09:04.022 fused_ordering(223) 00:09:04.022 fused_ordering(224) 00:09:04.022 fused_ordering(225) 00:09:04.022 fused_ordering(226) 00:09:04.022 fused_ordering(227) 00:09:04.022 fused_ordering(228) 00:09:04.022 fused_ordering(229) 00:09:04.022 fused_ordering(230) 00:09:04.022 fused_ordering(231) 00:09:04.022 fused_ordering(232) 00:09:04.022 fused_ordering(233) 00:09:04.022 fused_ordering(234) 00:09:04.022 fused_ordering(235) 00:09:04.022 fused_ordering(236) 00:09:04.022 fused_ordering(237) 00:09:04.022 fused_ordering(238) 00:09:04.022 fused_ordering(239) 00:09:04.022 fused_ordering(240) 00:09:04.022 fused_ordering(241) 00:09:04.022 fused_ordering(242) 00:09:04.022 fused_ordering(243) 00:09:04.022 fused_ordering(244) 00:09:04.022 fused_ordering(245) 00:09:04.022 fused_ordering(246) 00:09:04.022 fused_ordering(247) 00:09:04.022 fused_ordering(248) 00:09:04.022 fused_ordering(249) 00:09:04.022 fused_ordering(250) 00:09:04.022 fused_ordering(251) 00:09:04.022 fused_ordering(252) 00:09:04.022 fused_ordering(253) 00:09:04.022 fused_ordering(254) 00:09:04.022 fused_ordering(255) 00:09:04.022 fused_ordering(256) 00:09:04.022 fused_ordering(257) 00:09:04.022 fused_ordering(258) 00:09:04.022 fused_ordering(259) 00:09:04.022 fused_ordering(260) 00:09:04.022 fused_ordering(261) 00:09:04.022 fused_ordering(262) 00:09:04.022 fused_ordering(263) 00:09:04.022 fused_ordering(264) 00:09:04.022 fused_ordering(265) 00:09:04.022 fused_ordering(266) 00:09:04.022 fused_ordering(267) 00:09:04.022 fused_ordering(268) 00:09:04.022 fused_ordering(269) 00:09:04.022 fused_ordering(270) 00:09:04.022 fused_ordering(271) 00:09:04.022 fused_ordering(272) 00:09:04.022 fused_ordering(273) 00:09:04.022 fused_ordering(274) 00:09:04.022 fused_ordering(275) 00:09:04.022 fused_ordering(276) 00:09:04.022 fused_ordering(277) 00:09:04.022 fused_ordering(278) 00:09:04.022 fused_ordering(279) 00:09:04.022 fused_ordering(280) 00:09:04.022 fused_ordering(281) 00:09:04.022 fused_ordering(282) 00:09:04.022 fused_ordering(283) 00:09:04.022 fused_ordering(284) 00:09:04.022 fused_ordering(285) 00:09:04.022 fused_ordering(286) 00:09:04.022 fused_ordering(287) 00:09:04.022 fused_ordering(288) 00:09:04.022 fused_ordering(289) 00:09:04.022 fused_ordering(290) 00:09:04.022 fused_ordering(291) 00:09:04.022 fused_ordering(292) 00:09:04.022 fused_ordering(293) 00:09:04.022 fused_ordering(294) 00:09:04.022 fused_ordering(295) 00:09:04.022 fused_ordering(296) 00:09:04.022 fused_ordering(297) 00:09:04.022 fused_ordering(298) 00:09:04.022 fused_ordering(299) 00:09:04.022 fused_ordering(300) 00:09:04.022 fused_ordering(301) 00:09:04.022 fused_ordering(302) 00:09:04.022 fused_ordering(303) 00:09:04.022 fused_ordering(304) 00:09:04.022 fused_ordering(305) 00:09:04.022 fused_ordering(306) 00:09:04.022 fused_ordering(307) 00:09:04.022 fused_ordering(308) 00:09:04.022 fused_ordering(309) 00:09:04.022 fused_ordering(310) 00:09:04.022 fused_ordering(311) 00:09:04.022 
fused_ordering(312) 00:09:04.022 through fused_ordering(956) 00:09:06.525: 645 consecutive fused_ordering entries, identical apart from the incrementing counter and timestamp
fused_ordering(957) 00:09:06.525 fused_ordering(958) 00:09:06.525 fused_ordering(959) 00:09:06.525 fused_ordering(960) 00:09:06.525 fused_ordering(961) 00:09:06.525 fused_ordering(962) 00:09:06.525 fused_ordering(963) 00:09:06.525 fused_ordering(964) 00:09:06.525 fused_ordering(965) 00:09:06.525 fused_ordering(966) 00:09:06.525 fused_ordering(967) 00:09:06.525 fused_ordering(968) 00:09:06.525 fused_ordering(969) 00:09:06.525 fused_ordering(970) 00:09:06.525 fused_ordering(971) 00:09:06.525 fused_ordering(972) 00:09:06.525 fused_ordering(973) 00:09:06.525 fused_ordering(974) 00:09:06.525 fused_ordering(975) 00:09:06.525 fused_ordering(976) 00:09:06.525 fused_ordering(977) 00:09:06.525 fused_ordering(978) 00:09:06.525 fused_ordering(979) 00:09:06.525 fused_ordering(980) 00:09:06.525 fused_ordering(981) 00:09:06.525 fused_ordering(982) 00:09:06.525 fused_ordering(983) 00:09:06.525 fused_ordering(984) 00:09:06.525 fused_ordering(985) 00:09:06.525 fused_ordering(986) 00:09:06.525 fused_ordering(987) 00:09:06.525 fused_ordering(988) 00:09:06.525 fused_ordering(989) 00:09:06.525 fused_ordering(990) 00:09:06.525 fused_ordering(991) 00:09:06.525 fused_ordering(992) 00:09:06.525 fused_ordering(993) 00:09:06.525 fused_ordering(994) 00:09:06.525 fused_ordering(995) 00:09:06.525 fused_ordering(996) 00:09:06.525 fused_ordering(997) 00:09:06.525 fused_ordering(998) 00:09:06.525 fused_ordering(999) 00:09:06.525 fused_ordering(1000) 00:09:06.525 fused_ordering(1001) 00:09:06.525 fused_ordering(1002) 00:09:06.525 fused_ordering(1003) 00:09:06.525 fused_ordering(1004) 00:09:06.525 fused_ordering(1005) 00:09:06.525 fused_ordering(1006) 00:09:06.525 fused_ordering(1007) 00:09:06.525 fused_ordering(1008) 00:09:06.525 fused_ordering(1009) 00:09:06.525 fused_ordering(1010) 00:09:06.525 fused_ordering(1011) 00:09:06.525 fused_ordering(1012) 00:09:06.525 fused_ordering(1013) 00:09:06.525 fused_ordering(1014) 00:09:06.525 fused_ordering(1015) 00:09:06.525 fused_ordering(1016) 00:09:06.525 fused_ordering(1017) 00:09:06.525 fused_ordering(1018) 00:09:06.525 fused_ordering(1019) 00:09:06.525 fused_ordering(1020) 00:09:06.525 fused_ordering(1021) 00:09:06.525 fused_ordering(1022) 00:09:06.525 fused_ordering(1023) 00:09:06.525 02:25:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:06.525 02:25:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:06.525 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.525 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:06.525 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.525 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:06.525 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.525 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.525 rmmod nvme_tcp 00:09:06.525 rmmod nvme_fabrics 00:09:06.784 rmmod nvme_keyring 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2240378 ']' 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2240378 
00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 2240378 ']' 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 2240378 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2240378 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:06.784 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:06.785 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2240378' 00:09:06.785 killing process with pid 2240378 00:09:06.785 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 2240378 00:09:06.785 [2024-05-15 02:25:53.998654] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:06.785 02:25:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 2240378 00:09:07.044 02:25:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.044 02:25:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.044 02:25:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.044 02:25:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.044 02:25:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.044 02:25:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.044 02:25:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.044 02:25:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.949 02:25:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.949 00:09:08.949 real 0m10.405s 00:09:08.949 user 0m7.724s 00:09:08.949 sys 0m5.319s 00:09:08.949 02:25:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:08.949 02:25:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:08.949 ************************************ 00:09:08.949 END TEST nvmf_fused_ordering 00:09:08.949 ************************************ 00:09:08.949 02:25:56 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:08.949 02:25:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:08.949 02:25:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:08.949 02:25:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.208 ************************************ 00:09:09.208 START TEST nvmf_delete_subsystem 00:09:09.208 ************************************ 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
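For reference, the nvmf_fused_ordering teardown captured above (nvmftestfini, nvmfcleanup, killprocess, _remove_spdk_ns) reduces to a handful of host commands. A minimal manual sketch, reusing the PID and interface names from this particular run; the final netns removal is an assumption about what _remove_spdk_ns amounts to, since its output is redirected away in the log:

sudo kill 2240378                     # stop the nvmf_tgt app, as killprocess does above
sudo modprobe -v -r nvme-tcp          # rmmod's nvme_tcp, nvme_fabrics, nvme_keyring as logged
sudo modprobe -v -r nvme-fabrics
sudo ip -4 addr flush cvl_0_1         # clear the initiator-side test address
sudo ip netns del cvl_0_0_ns_spdk     # assumption: the effective result of _remove_spdk_ns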
00:09:09.208 * Looking for test storage... 00:09:09.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.208 02:25:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:11.744 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:11.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:11.745 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:11.745 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:11.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.745 02:25:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:11.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:09:11.745 00:09:11.745 --- 10.0.0.2 ping statistics --- 00:09:11.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.745 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:09:11.745 00:09:11.745 --- 10.0.0.1 ping statistics --- 00:09:11.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.745 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2243278 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2243278 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 2243278 ']' 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
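At this point nvmf_tcp_init (above) has rebuilt the two-namespace topology these tcp tests use: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are ping-verified before the target app is started inside the namespace. Condensed from the commands logged above (run as root, as the harness does):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # then wait for /var/tmp/spdk.sock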
00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:11.745 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.004 [2024-05-15 02:25:59.184866] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:09:12.004 [2024-05-15 02:25:59.184968] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.004 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.004 [2024-05-15 02:25:59.267355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:12.004 [2024-05-15 02:25:59.388439] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.004 [2024-05-15 02:25:59.388504] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.004 [2024-05-15 02:25:59.388519] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.004 [2024-05-15 02:25:59.388532] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.004 [2024-05-15 02:25:59.388544] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.004 [2024-05-15 02:25:59.389951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.004 [2024-05-15 02:25:59.389962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.262 [2024-05-15 02:25:59.542876] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.262 02:25:59 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.262 [2024-05-15 02:25:59.558851] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:12.262 [2024-05-15 02:25:59.559175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.262 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.263 NULL1 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.263 Delay0 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2243306 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:12.263 02:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:12.263 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.263 [2024-05-15 02:25:59.633981] subsystem.c:1536:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
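The rpc_cmd calls above provision the target that delete_subsystem.sh is about to tear down mid-I/O. As a sketch, the same setup issued directly with scripts/rpc.py against the nvmf_tgt started earlier (rpc_cmd is the harness wrapper around this; SPDK= below is just shorthand for the workspace checkout path):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512                 # null bdev NULL1, block size 512, as invoked above
$SPDK/scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Load generator the script launches before sleeping 2 seconds (perf_pid above):
$SPDK/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The nvmf_delete_subsystem call that follows removes nqn.2016-06.io.spdk:cnode1 while this 128-deep random read/write workload is still queued against the deliberately slow Delay0 bdev, which is what produces the aborted submissions (starting I/O failed: -6) and error completions logged next.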
00:09:14.790 02:26:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.790 02:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.790 02:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.790 Read completed with error (sct=0, sc=8) 00:09:14.790 starting I/O failed: -6 00:09:14.790 Read completed with error (sct=0, sc=8) 00:09:14.790 Read completed with error (sct=0, sc=8) 00:09:14.790 Write completed with error (sct=0, sc=8) 00:09:14.790 Read completed with error (sct=0, sc=8) 00:09:14.790 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 [2024-05-15 02:26:01.724637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d5e10 is same with the state(5) to be set 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 
Read completed with error (sct=0, sc=8) / Write completed with error (sct=0, sc=8) completions and further starting I/O failed: -6 aborts repeat at 00:09:14.791 for the remaining queued operations, ending with 00:09:14.791 Read
completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Write completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 starting I/O failed: -6 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.791 Read completed with error (sct=0, sc=8) 00:09:14.792 starting I/O failed: -6 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 starting I/O failed: -6 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 starting I/O failed: -6 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 starting I/O failed: -6 00:09:14.792 Write completed with error (sct=0, sc=8) 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 starting I/O failed: -6 00:09:14.792 Write completed with error (sct=0, sc=8) 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 starting I/O failed: -6 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 starting I/O failed: -6 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 starting I/O failed: -6 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 starting I/O failed: -6 00:09:14.792 Read completed with error (sct=0, sc=8) 00:09:14.792 [2024-05-15 02:26:01.726182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffbf0000c00 is same with the state(5) to be set 00:09:15.357 [2024-05-15 02:26:02.696426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f57f0 is same with the state(5) to be set 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error 
(sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 [2024-05-15 02:26:02.727205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffbf000bfe0 is same with the state(5) to be set 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Write completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.357 [2024-05-15 02:26:02.727384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc790 is same with the state(5) to be set 00:09:15.357 Read completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 
00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 [2024-05-15 02:26:02.728047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffbf000c600 is same with the state(5) to be set 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Write completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 Read completed with error (sct=0, sc=8) 00:09:15.358 [2024-05-15 02:26:02.728224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d6880 is same with the state(5) to be set 00:09:15.358 Initializing NVMe Controllers 00:09:15.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:09:15.358 Controller IO queue size 128, less than required. 00:09:15.358 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:15.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:15.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:15.358 Initialization complete. Launching workers. 00:09:15.358 ======================================================== 00:09:15.358 Latency(us) 00:09:15.358 Device Information : IOPS MiB/s Average min max 00:09:15.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.29 0.08 907671.11 579.80 2003209.11 00:09:15.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 184.17 0.09 924277.04 607.62 2002563.18 00:09:15.358 ======================================================== 00:09:15.358 Total : 351.45 0.17 916372.81 579.80 2003209.11 00:09:15.358 00:09:15.358 [2024-05-15 02:26:02.729469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f57f0 (9): Bad file descriptor 00:09:15.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:15.358 02:26:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.358 02:26:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:15.358 02:26:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2243306 00:09:15.358 02:26:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2243306 00:09:15.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2243306) - No such process 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2243306 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2243306 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2243306 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:15.923 02:26:03 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:15.923 [2024-05-15 02:26:03.253438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2243810 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243810 00:09:15.923 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:15.923 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.923 [2024-05-15 02:26:03.317235] subsystem.c:1536:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
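The pattern the test settles into at this point is simple: spdk_nvme_perf is started against the listener that was just re-created, and the script then polls the perf pid with kill -0 every half second while the subsystem is deleted underneath it. A minimal bash sketch of that pattern is shown below, reusing the perf options visible in this run; $SPDK_DIR is a placeholder and backgrounding perf directly is a simplification of the script's helpers, so treat it as an illustration rather than the test script itself.

    #!/usr/bin/env bash
    # Sketch only: run an I/O generator against the TCP target, then poll it
    # with `kill -0` until it exits (it is expected to fail once the subsystem
    # is deleted out from under it).
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only tests that the pid still exists
        (( delay++ > 20 )) && break             # give up after ~10 s of 0.5 s sleeps
        sleep 0.5
    done
    wait "$perf_pid" || true                    # reap it; a non-zero exit is expected here

kill -0 sends no signal at all; it only reports whether the pid is still alive, which is why the loop doubles as a cheap liveness probe for the perf process.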
00:09:16.489 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.489 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243810 00:09:16.489 02:26:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.054 02:26:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:17.054 02:26:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243810 00:09:17.054 02:26:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.619 02:26:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:17.619 02:26:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243810 00:09:17.619 02:26:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.876 02:26:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:17.876 02:26:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243810 00:09:17.876 02:26:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:18.442 02:26:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:18.442 02:26:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243810 00:09:18.442 02:26:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:19.009 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:19.009 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243810 00:09:19.009 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:19.267 Initializing NVMe Controllers 00:09:19.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:19.267 Controller IO queue size 128, less than required. 00:09:19.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:19.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:19.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:19.267 Initialization complete. Launching workers. 
00:09:19.267 ======================================================== 00:09:19.267 Latency(us) 00:09:19.267 Device Information : IOPS MiB/s Average min max 00:09:19.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003692.41 1000243.41 1043501.99 00:09:19.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005041.41 1000253.81 1011692.91 00:09:19.267 ======================================================== 00:09:19.267 Total : 256.00 0.12 1004366.91 1000243.41 1043501.99 00:09:19.267 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2243810 00:09:19.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2243810) - No such process 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2243810 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.524 rmmod nvme_tcp 00:09:19.524 rmmod nvme_fabrics 00:09:19.524 rmmod nvme_keyring 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2243278 ']' 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2243278 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 2243278 ']' 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 2243278 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2243278 00:09:19.524 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:19.525 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:19.525 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2243278' 00:09:19.525 killing process with pid 2243278 00:09:19.525 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 2243278 00:09:19.525 [2024-05-15 02:26:06.868015] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:19.525 02:26:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 2243278 00:09:19.782 02:26:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.782 02:26:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.782 02:26:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.782 02:26:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.782 02:26:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.782 02:26:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.782 02:26:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.782 02:26:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.334 02:26:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.334 00:09:22.334 real 0m12.829s 00:09:22.334 user 0m27.792s 00:09:22.334 sys 0m3.369s 00:09:22.334 02:26:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.334 02:26:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:22.334 ************************************ 00:09:22.334 END TEST nvmf_delete_subsystem 00:09:22.334 ************************************ 00:09:22.334 02:26:09 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:22.334 02:26:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:22.334 02:26:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.334 02:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.334 ************************************ 00:09:22.334 START TEST nvmf_ns_masking 00:09:22.334 ************************************ 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:22.334 * Looking for test storage... 
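Before the next test starts, the nvmftestfini sequence above walks back everything the fixture set up: the initiator-side NVMe modules are unloaded, the nvmf_tgt process is killed by pid after checking that the pid still names a reactor_0 process, and the test network state is flushed. A condensed sketch of those steps follows; $NVMF_PID stands in for the target pid and the namespace cleanup is simplified to a single command, so this is an approximation of the helpers, not their exact contents.

    # Teardown sketch mirroring nvmftestfini (placeholders noted above).
    sync
    modprobe -v -r nvme-tcp nvme-fabrics                   # unload initiator-side modules
    if [ -n "$NVMF_PID" ] && \
       [ "$(ps --no-headers -o comm= "$NVMF_PID")" = reactor_0 ]; then
        kill "$NVMF_PID"                                   # stop the SPDK target
        wait "$NVMF_PID" 2>/dev/null || true
    fi
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # drop the target netns (simplified)
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address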
00:09:22.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=a90a9a1f-3fcf-43d7-a10c-53220807bf92 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.334 02:26:09 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.334 02:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:24.865 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:24.865 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:24.865 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
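The helpers traced here first build allow-lists of NIC PCI IDs (E810, x722 and Mellanox parts) and then, for each matching PCI function, look under /sys/bus/pci/devices/<bdf>/net/ to find the kernel net device, which is how cvl_0_0 was just located. A rough stand-alone equivalent of that discovery step, limited to the 0x8086:0x159b (E810) ID actually matched in this run, might look like the sketch below; it is an illustration of the idea, not the common.sh code.

    # Sketch: list net devices backed by Intel E810 (8086:159b) PCI functions.
    declare -a net_devs=()
    for bdf in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
            [ -e "$netdir" ] || continue                   # skip functions with no net device
            dev=$(basename "$netdir")
            echo "Found net device under $bdf: $dev (operstate=$(cat "$netdir/operstate"))"
            net_devs+=("$dev")
        done
    done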
00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:24.865 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.865 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:09:24.866 00:09:24.866 --- 10.0.0.2 ping statistics --- 00:09:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.866 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:09:24.866 00:09:24.866 --- 10.0.0.1 ping statistics --- 00:09:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.866 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2246470 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2246470 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 2246470 ']' 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:24.866 02:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:24.866 [2024-05-15 02:26:11.909266] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
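Everything the initiator reaches at 10.0.0.2 lives in a dedicated network namespace: nvmf_tcp_init moves the target-side port into cvl_0_0_ns_spdk, addresses both ends, opens TCP/4420, and verifies reachability with ping in both directions before the target is launched inside the namespace. Condensed from the commands above into one sketch (only $SPDK_DIR is a placeholder; the interface names and addresses are the ones used in this run):

    # Target-side port in its own netns; initiator-side port stays in the root netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root netns -> target netns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target netns -> root netns
    # The target itself is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF

The rpc.py calls and nvme connect sessions that follow all ride on this split: the target only sees traffic arriving on cvl_0_0 inside the namespace, while the initiator uses cvl_0_1 in the root namespace.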
00:09:24.866 [2024-05-15 02:26:11.909344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.866 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.866 [2024-05-15 02:26:11.993688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.866 [2024-05-15 02:26:12.116160] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.866 [2024-05-15 02:26:12.116223] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.866 [2024-05-15 02:26:12.116240] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.866 [2024-05-15 02:26:12.116253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.866 [2024-05-15 02:26:12.116265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.866 [2024-05-15 02:26:12.116325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.866 [2024-05-15 02:26:12.116353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.866 [2024-05-15 02:26:12.116474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.866 [2024-05-15 02:26:12.116478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.866 02:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:24.866 02:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:09:24.866 02:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.866 02:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.866 02:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:24.866 02:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.866 02:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.124 [2024-05-15 02:26:12.490288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.124 02:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:25.124 02:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:25.124 02:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:25.382 Malloc1 00:09:25.382 02:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:25.640 Malloc2 00:09:25.640 02:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:25.897 02:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:26.156 02:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.424 [2024-05-15 02:26:13.762543] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:26.424 [2024-05-15 02:26:13.762857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.424 02:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:09:26.424 02:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a90a9a1f-3fcf-43d7-a10c-53220807bf92 -a 10.0.0.2 -s 4420 -i 4 00:09:26.682 02:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.682 02:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:26.682 02:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.682 02:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:26.682 02:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:28.583 [ 0]:0x1 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:28.583 02:26:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:28.840 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6433a96bc55c438a90222395ee54f5d9 00:09:28.840 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6433a96bc55c438a90222395ee54f5d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:28.840 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:29.099 [ 0]:0x1 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6433a96bc55c438a90222395ee54f5d9 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6433a96bc55c438a90222395ee54f5d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:29.099 [ 1]:0x2 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e133944cd296499b9e198dada5d33ab7 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e133944cd296499b9e198dada5d33ab7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:09:29.099 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.357 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.615 02:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:29.872 02:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:09:29.872 02:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a90a9a1f-3fcf-43d7-a10c-53220807bf92 -a 10.0.0.2 -s 4420 -i 4 00:09:30.130 02:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:30.130 02:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:30.130 02:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.130 02:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:09:30.130 02:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:09:30.130 02:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:32.030 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:32.288 [ 0]:0x2 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e133944cd296499b9e198dada5d33ab7 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e133944cd296499b9e198dada5d33ab7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:32.288 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:32.546 [ 0]:0x1 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6433a96bc55c438a90222395ee54f5d9 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6433a96bc55c438a90222395ee54f5d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:32.546 [ 1]:0x2 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:32.546 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:32.804 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e133944cd296499b9e198dada5d33ab7 00:09:32.804 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e133944cd296499b9e198dada5d33ab7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:32.804 02:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:33.062 02:26:20 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:33.062 [ 0]:0x2 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e133944cd296499b9e198dada5d33ab7 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e133944cd296499b9e198dada5d33ab7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.062 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:33.320 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:09:33.320 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a90a9a1f-3fcf-43d7-a10c-53220807bf92 -a 10.0.0.2 -s 4420 -i 4 00:09:33.578 02:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:33.578 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:33.578 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.578 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:33.578 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:33.578 02:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:35.476 02:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:35.476 02:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:35.477 [ 0]:0x1 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6433a96bc55c438a90222395ee54f5d9 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6433a96bc55c438a90222395ee54f5d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:35.477 [ 1]:0x2 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e133944cd296499b9e198dada5d33ab7 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e133944cd296499b9e198dada5d33ab7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:35.477 02:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:35.735 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:35.993 [ 0]:0x2 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e133944cd296499b9e198dada5d33ab7 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e133944cd296499b9e198dada5d33ab7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:35.993 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:36.257 [2024-05-15 02:26:23.409907] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:36.257 
request: 00:09:36.257 { 00:09:36.257 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:36.257 "nsid": 2, 00:09:36.257 "host": "nqn.2016-06.io.spdk:host1", 00:09:36.257 "method": "nvmf_ns_remove_host", 00:09:36.257 "req_id": 1 00:09:36.257 } 00:09:36.257 Got JSON-RPC error response 00:09:36.257 response: 00:09:36.257 { 00:09:36.257 "code": -32602, 00:09:36.257 "message": "Invalid parameters" 00:09:36.257 } 00:09:36.257 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:36.257 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:36.257 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:36.257 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:36.257 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:36.257 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:36.257 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:36.258 [ 0]:0x2 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=e133944cd296499b9e198dada5d33ab7 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ e133944cd296499b9e198dada5d33ab7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:09:36.258 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.519 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.519 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:36.519 02:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:36.519 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.519 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:36.520 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.520 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:36.520 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.520 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.821 rmmod nvme_tcp 00:09:36.821 rmmod nvme_fabrics 00:09:36.821 rmmod nvme_keyring 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2246470 ']' 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2246470 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 2246470 ']' 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 2246470 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:36.821 02:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2246470 00:09:36.821 02:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:36.821 02:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:36.821 02:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2246470' 00:09:36.821 killing process with pid 2246470 00:09:36.821 02:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 2246470 00:09:36.821 [2024-05-15 02:26:24.018887] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:36.821 02:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 2246470 00:09:37.081 02:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.081 02:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.081 02:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.081 02:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:09:37.081 02:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.081 02:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.081 02:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.081 02:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.006 02:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.006 00:09:39.006 real 0m17.123s 00:09:39.006 user 0m52.000s 00:09:39.006 sys 0m4.067s 00:09:39.006 02:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:39.006 02:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:39.006 ************************************ 00:09:39.006 END TEST nvmf_ns_masking 00:09:39.006 ************************************ 00:09:39.006 02:26:26 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:39.006 02:26:26 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:39.006 02:26:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:39.006 02:26:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:39.006 02:26:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:39.266 ************************************ 00:09:39.266 START TEST nvmf_nvme_cli 00:09:39.266 ************************************ 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:39.266 * Looking for test storage... 
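For reference, the ns_masking run that finished above boils down to a short RPC/initiator sequence: export a namespace with --no-auto-visible, toggle per-host visibility with nvmf_ns_add_host / nvmf_ns_remove_host, and confirm the effect from the initiator via the NGUID the controller reports. A minimal sketch, assuming rpc.py is on PATH (the log invokes the full scripts/rpc.py path) and the controller is already connected as /dev/nvme0:

    # namespace 1 starts out hidden from every host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant, then revoke, visibility for a single host NQN
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # ns_is_visible-style check: a masked namespace drops out of list-ns
    # or shows an all-zero NGUID in id-ns
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid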
00:09:39.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.266 02:26:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:41.799 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:41.799 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:41.799 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:41.799 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.799 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:41.800 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.800 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.800 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:41.800 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:41.800 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.800 02:26:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:41.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:09:41.800 00:09:41.800 --- 10.0.0.2 ping statistics --- 00:09:41.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.800 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:09:41.800 00:09:41.800 --- 10.0.0.1 ping statistics --- 00:09:41.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.800 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2250319 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2250319 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 2250319 ']' 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:41.800 02:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.800 [2024-05-15 02:26:29.195967] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:09:41.800 [2024-05-15 02:26:29.196062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.059 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.059 [2024-05-15 02:26:29.280330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.059 [2024-05-15 02:26:29.404008] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.059 [2024-05-15 02:26:29.404059] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
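The nvmf_tcp_init trace above sets the two e810 ports up as a point-to-point pair: one port (cvl_0_0) is moved into a network namespace to host the target, its peer (cvl_0_1) stays in the default namespace as the initiator, TCP port 4420 is opened, and the path is pinged in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch, with interface and namespace names as recorded in the log and the target binary path shortened:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF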
00:09:42.059 [2024-05-15 02:26:29.404085] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.059 [2024-05-15 02:26:29.404099] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.059 [2024-05-15 02:26:29.404111] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.059 [2024-05-15 02:26:29.404185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.059 [2024-05-15 02:26:29.405952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.059 [2024-05-15 02:26:29.405999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.059 [2024-05-15 02:26:29.406004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 [2024-05-15 02:26:30.229990] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 Malloc0 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 Malloc1 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.995 02:26:30 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 [2024-05-15 02:26:30.316219] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:42.995 [2024-05-15 02:26:30.316555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.995 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:42.995 00:09:42.995 Discovery Log Number of Records 2, Generation counter 2 00:09:42.995 =====Discovery Log Entry 0====== 00:09:42.995 trtype: tcp 00:09:42.995 adrfam: ipv4 00:09:42.995 subtype: current discovery subsystem 00:09:42.995 treq: not required 00:09:42.995 portid: 0 00:09:42.995 trsvcid: 4420 00:09:42.995 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:42.995 traddr: 10.0.0.2 00:09:42.995 eflags: explicit discovery connections, duplicate discovery information 00:09:42.995 sectype: none 00:09:42.995 =====Discovery Log Entry 1====== 00:09:42.995 trtype: tcp 00:09:42.996 adrfam: ipv4 00:09:42.996 subtype: nvme subsystem 00:09:42.996 treq: not required 00:09:42.996 portid: 0 00:09:42.996 trsvcid: 4420 00:09:42.996 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:42.996 traddr: 10.0.0.2 00:09:42.996 eflags: none 00:09:42.996 sectype: none 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
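After the discovery listing above, the test connects to the subsystem with nvme-cli and polls until the expected block devices appear; waitforserial() keys on the subsystem serial rather than a device name. A rough equivalent of that sequence, with the serial, NQNs and address taken from the log and the loop variables (expected, found) introduced here purely for illustration:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

    # waitforserial SPDKISFASTANDAWESOME 2: retry up to ~15 times, 2 s apart,
    # until lsblk lists the expected number of namespaces with that serial
    expected=2
    for i in $(seq 1 15); do
        sleep 2
        found=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
        (( found == expected )) && break
    done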
00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:42.996 02:26:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:43.930 02:26:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:43.930 02:26:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:09:43.930 02:26:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:43.930 02:26:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:43.930 02:26:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:43.930 02:26:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:45.828 /dev/nvme0n1 ]] 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:45.828 02:26:33 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:45.828 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:46.086 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.343 rmmod nvme_tcp 00:09:46.343 rmmod nvme_fabrics 00:09:46.343 rmmod nvme_keyring 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:09:46.343 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2250319 ']' 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2250319 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 2250319 ']' 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 2250319 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2250319 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2250319' 00:09:46.344 killing process with pid 2250319 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 2250319 00:09:46.344 [2024-05-15 02:26:33.685674] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:46.344 02:26:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 2250319 00:09:46.602 02:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.602 02:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.602 02:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.602 02:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.602 02:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.602 02:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.602 02:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.602 02:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.135 02:26:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.135 00:09:49.135 real 0m9.618s 00:09:49.135 user 0m19.020s 00:09:49.135 sys 0m2.584s 00:09:49.135 02:26:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:49.135 02:26:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:49.135 ************************************ 00:09:49.135 END TEST nvmf_nvme_cli 00:09:49.135 ************************************ 00:09:49.135 02:26:36 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:49.135 02:26:36 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:49.135 02:26:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:49.135 02:26:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:49.135 02:26:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.135 ************************************ 00:09:49.135 START 
TEST nvmf_vfio_user 00:09:49.135 ************************************ 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:49.135 * Looking for test storage... 00:09:49.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
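For reference, the nvmf_nvme_cli test that finishes above reduces to a short nvme-cli cycle against the TCP listener. A minimal sketch using only the values visible in the trace (hostnqn/hostid, 10.0.0.2:4420, serial SPDKISFASTANDAWESOME); it is an illustration, not the test script itself:

  # connect to the subsystem exported over TCP (values taken from the trace above)
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
               --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # wait until both namespaces appear, matching on the subsystem serial
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do sleep 1; done
  nvme list
  # tear the host side down again
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1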
00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2251374 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2251374' 00:09:49.135 Process pid: 2251374 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2251374 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2251374 ']' 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:49.135 02:26:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.136 02:26:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:49.136 02:26:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:49.136 [2024-05-15 02:26:36.216679] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:09:49.136 [2024-05-15 02:26:36.216762] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.136 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.136 [2024-05-15 02:26:36.287654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.136 [2024-05-15 02:26:36.397115] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.136 [2024-05-15 02:26:36.397166] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.136 [2024-05-15 02:26:36.397180] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.136 [2024-05-15 02:26:36.397192] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.136 [2024-05-15 02:26:36.397210] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
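The vfio-user fixture above starts a dedicated nvmf_tgt (pid 2251374) on cores 0-3 and waits for its RPC socket before configuring anything. A hedged sketch of the equivalent manual steps from the SPDK repo root; the polling loop is an assumption standing in for the test's waitforlisten helper:

  # launch the target with the core mask and event mask shown in the log
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # poll the default /var/tmp/spdk.sock until the application answers RPCs
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done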
00:09:49.136 [2024-05-15 02:26:36.397273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.136 [2024-05-15 02:26:36.397309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.136 [2024-05-15 02:26:36.397358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.136 [2024-05-15 02:26:36.397361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.136 02:26:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:49.136 02:26:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:09:49.136 02:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:50.504 02:26:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:50.504 02:26:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:50.504 02:26:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:50.504 02:26:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:50.504 02:26:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:50.504 02:26:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:50.761 Malloc1 00:09:50.761 02:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:51.019 02:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:51.276 02:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:51.533 [2024-05-15 02:26:38.759058] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:51.533 02:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:51.533 02:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:51.533 02:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:51.791 Malloc2 00:09:51.791 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:52.048 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:52.305 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
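The setup_nvmf_vfio_user calls above boil down to one RPC sequence per device. A sketch that repeats it for vfio-user1 only, using exactly the RPCs shown in the trace (64 MB Malloc bdev, 512-byte blocks, NQN nqn.2019-07.io.spdk:cnode1, serial SPDK1); the second device follows the same pattern with cnode2/Malloc2:

  rpc_py=./scripts/rpc.py
  $rpc_py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc_py bdev_malloc_create 64 512 -b Malloc1
  $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The spdk_nvme_identify and spdk_nvme_perf runs that follow consume exactly this address via -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.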
00:09:52.565 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:52.565 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:52.565 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:52.565 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:52.565 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:52.565 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:52.565 [2024-05-15 02:26:39.806722] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:09:52.565 [2024-05-15 02:26:39.806768] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251793 ] 00:09:52.565 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.565 [2024-05-15 02:26:39.841157] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:52.565 [2024-05-15 02:26:39.843740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.565 [2024-05-15 02:26:39.843767] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa829105000 00:09:52.565 [2024-05-15 02:26:39.844738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.565 [2024-05-15 02:26:39.845734] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.565 [2024-05-15 02:26:39.846735] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.565 [2024-05-15 02:26:39.847739] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.565 [2024-05-15 02:26:39.848745] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.565 [2024-05-15 02:26:39.849752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.565 [2024-05-15 02:26:39.850759] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.565 [2024-05-15 02:26:39.851766] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.565 [2024-05-15 02:26:39.852772] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.565 [2024-05-15 02:26:39.852795] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa8290fa000 00:09:52.565 [2024-05-15 02:26:39.853927] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:52.565 [2024-05-15 02:26:39.868516] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:52.565 [2024-05-15 02:26:39.868553] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:52.565 [2024-05-15 02:26:39.873936] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:52.565 [2024-05-15 02:26:39.874002] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:52.565 [2024-05-15 02:26:39.874105] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:52.565 [2024-05-15 02:26:39.874151] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:52.565 [2024-05-15 02:26:39.874162] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:52.565 [2024-05-15 02:26:39.874918] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:52.565 [2024-05-15 02:26:39.874943] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:52.565 [2024-05-15 02:26:39.874963] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:52.565 [2024-05-15 02:26:39.875908] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:52.565 [2024-05-15 02:26:39.875945] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:52.566 [2024-05-15 02:26:39.875960] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:52.566 [2024-05-15 02:26:39.876921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:52.566 [2024-05-15 02:26:39.876962] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:52.566 [2024-05-15 02:26:39.877945] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:52.566 [2024-05-15 02:26:39.877964] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:52.566 [2024-05-15 02:26:39.877974] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:52.566 [2024-05-15 02:26:39.877986] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:52.566 
[2024-05-15 02:26:39.878096] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:52.566 [2024-05-15 02:26:39.878104] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:52.566 [2024-05-15 02:26:39.878113] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:52.566 [2024-05-15 02:26:39.879939] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:52.566 [2024-05-15 02:26:39.880957] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:52.566 [2024-05-15 02:26:39.881961] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:52.566 [2024-05-15 02:26:39.882956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:52.566 [2024-05-15 02:26:39.883104] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:52.566 [2024-05-15 02:26:39.883970] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:52.566 [2024-05-15 02:26:39.883988] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:52.566 [2024-05-15 02:26:39.883998] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884023] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:52.566 [2024-05-15 02:26:39.884043] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884073] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.566 [2024-05-15 02:26:39.884087] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.566 [2024-05-15 02:26:39.884112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.566 [2024-05-15 02:26:39.884192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:52.566 [2024-05-15 02:26:39.884226] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:52.566 [2024-05-15 02:26:39.884234] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:52.566 [2024-05-15 02:26:39.884242] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:52.566 [2024-05-15 02:26:39.884249] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:52.566 [2024-05-15 02:26:39.884256] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:52.566 [2024-05-15 02:26:39.884264] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:52.566 [2024-05-15 02:26:39.884271] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884289] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:52.566 [2024-05-15 02:26:39.884328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:52.566 [2024-05-15 02:26:39.884345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.566 [2024-05-15 02:26:39.884357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.566 [2024-05-15 02:26:39.884369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.566 [2024-05-15 02:26:39.884380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.566 [2024-05-15 02:26:39.884389] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884405] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:52.566 [2024-05-15 02:26:39.884431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:52.566 [2024-05-15 02:26:39.884442] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:52.566 [2024-05-15 02:26:39.884450] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884461] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884475] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.566 [2024-05-15 
02:26:39.884503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:52.566 [2024-05-15 02:26:39.884558] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884574] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884588] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:52.566 [2024-05-15 02:26:39.884596] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:52.566 [2024-05-15 02:26:39.884605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:52.566 [2024-05-15 02:26:39.884620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:52.566 [2024-05-15 02:26:39.884643] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:52.566 [2024-05-15 02:26:39.884659] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884673] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884685] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.566 [2024-05-15 02:26:39.884693] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.566 [2024-05-15 02:26:39.884702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.566 [2024-05-15 02:26:39.884734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:52.566 [2024-05-15 02:26:39.884753] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884767] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884778] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.566 [2024-05-15 02:26:39.884786] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.566 [2024-05-15 02:26:39.884794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.566 [2024-05-15 02:26:39.884809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:52.566 [2024-05-15 02:26:39.884829] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:52.566 
[2024-05-15 02:26:39.884841] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884855] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884866] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884874] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:52.566 [2024-05-15 02:26:39.884886] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:52.566 [2024-05-15 02:26:39.884894] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:52.567 [2024-05-15 02:26:39.884902] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:52.567 [2024-05-15 02:26:39.884963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:52.567 [2024-05-15 02:26:39.884985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:52.567 [2024-05-15 02:26:39.885005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:52.567 [2024-05-15 02:26:39.885017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:52.567 [2024-05-15 02:26:39.885033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:52.567 [2024-05-15 02:26:39.885045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:52.567 [2024-05-15 02:26:39.885062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.567 [2024-05-15 02:26:39.885073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:52.567 [2024-05-15 02:26:39.885093] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:52.567 [2024-05-15 02:26:39.885102] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:52.567 [2024-05-15 02:26:39.885109] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:52.567 [2024-05-15 02:26:39.885115] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:52.567 [2024-05-15 02:26:39.885125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:52.567 [2024-05-15 02:26:39.885137] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:52.567 [2024-05-15 02:26:39.885145] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:52.567 [2024-05-15 02:26:39.885154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:52.567 [2024-05-15 02:26:39.885165] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:52.567 [2024-05-15 02:26:39.885174] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.567 [2024-05-15 02:26:39.885183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.567 [2024-05-15 02:26:39.885199] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:52.567 [2024-05-15 02:26:39.885208] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:52.567 [2024-05-15 02:26:39.885233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:52.567 [2024-05-15 02:26:39.885245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:52.567 [2024-05-15 02:26:39.885264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:52.567 [2024-05-15 02:26:39.885284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:52.567 [2024-05-15 02:26:39.885299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:52.567 ===================================================== 00:09:52.567 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:52.567 ===================================================== 00:09:52.567 Controller Capabilities/Features 00:09:52.567 ================================ 00:09:52.567 Vendor ID: 4e58 00:09:52.567 Subsystem Vendor ID: 4e58 00:09:52.567 Serial Number: SPDK1 00:09:52.567 Model Number: SPDK bdev Controller 00:09:52.567 Firmware Version: 24.05 00:09:52.567 Recommended Arb Burst: 6 00:09:52.567 IEEE OUI Identifier: 8d 6b 50 00:09:52.567 Multi-path I/O 00:09:52.567 May have multiple subsystem ports: Yes 00:09:52.567 May have multiple controllers: Yes 00:09:52.567 Associated with SR-IOV VF: No 00:09:52.567 Max Data Transfer Size: 131072 00:09:52.567 Max Number of Namespaces: 32 00:09:52.567 Max Number of I/O Queues: 127 00:09:52.567 NVMe Specification Version (VS): 1.3 00:09:52.567 NVMe Specification Version (Identify): 1.3 00:09:52.567 Maximum Queue Entries: 256 00:09:52.567 Contiguous Queues Required: Yes 00:09:52.567 Arbitration Mechanisms Supported 00:09:52.567 Weighted Round Robin: Not Supported 00:09:52.567 Vendor Specific: Not Supported 00:09:52.567 Reset Timeout: 15000 ms 00:09:52.567 Doorbell Stride: 4 bytes 00:09:52.567 NVM Subsystem Reset: Not Supported 00:09:52.567 Command Sets Supported 00:09:52.567 NVM Command Set: Supported 00:09:52.567 Boot Partition: Not Supported 00:09:52.567 Memory Page Size Minimum: 4096 bytes 00:09:52.567 Memory Page Size Maximum: 4096 bytes 00:09:52.567 Persistent Memory Region: Not Supported 00:09:52.567 Optional Asynchronous 
Events Supported 00:09:52.567 Namespace Attribute Notices: Supported 00:09:52.567 Firmware Activation Notices: Not Supported 00:09:52.567 ANA Change Notices: Not Supported 00:09:52.567 PLE Aggregate Log Change Notices: Not Supported 00:09:52.567 LBA Status Info Alert Notices: Not Supported 00:09:52.567 EGE Aggregate Log Change Notices: Not Supported 00:09:52.567 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.567 Zone Descriptor Change Notices: Not Supported 00:09:52.567 Discovery Log Change Notices: Not Supported 00:09:52.567 Controller Attributes 00:09:52.567 128-bit Host Identifier: Supported 00:09:52.567 Non-Operational Permissive Mode: Not Supported 00:09:52.567 NVM Sets: Not Supported 00:09:52.567 Read Recovery Levels: Not Supported 00:09:52.567 Endurance Groups: Not Supported 00:09:52.567 Predictable Latency Mode: Not Supported 00:09:52.567 Traffic Based Keep ALive: Not Supported 00:09:52.567 Namespace Granularity: Not Supported 00:09:52.567 SQ Associations: Not Supported 00:09:52.567 UUID List: Not Supported 00:09:52.567 Multi-Domain Subsystem: Not Supported 00:09:52.567 Fixed Capacity Management: Not Supported 00:09:52.567 Variable Capacity Management: Not Supported 00:09:52.567 Delete Endurance Group: Not Supported 00:09:52.567 Delete NVM Set: Not Supported 00:09:52.567 Extended LBA Formats Supported: Not Supported 00:09:52.567 Flexible Data Placement Supported: Not Supported 00:09:52.567 00:09:52.567 Controller Memory Buffer Support 00:09:52.567 ================================ 00:09:52.567 Supported: No 00:09:52.567 00:09:52.567 Persistent Memory Region Support 00:09:52.567 ================================ 00:09:52.567 Supported: No 00:09:52.567 00:09:52.567 Admin Command Set Attributes 00:09:52.567 ============================ 00:09:52.567 Security Send/Receive: Not Supported 00:09:52.567 Format NVM: Not Supported 00:09:52.567 Firmware Activate/Download: Not Supported 00:09:52.567 Namespace Management: Not Supported 00:09:52.567 Device Self-Test: Not Supported 00:09:52.567 Directives: Not Supported 00:09:52.567 NVMe-MI: Not Supported 00:09:52.568 Virtualization Management: Not Supported 00:09:52.568 Doorbell Buffer Config: Not Supported 00:09:52.568 Get LBA Status Capability: Not Supported 00:09:52.568 Command & Feature Lockdown Capability: Not Supported 00:09:52.568 Abort Command Limit: 4 00:09:52.568 Async Event Request Limit: 4 00:09:52.568 Number of Firmware Slots: N/A 00:09:52.568 Firmware Slot 1 Read-Only: N/A 00:09:52.568 Firmware Activation Without Reset: N/A 00:09:52.568 Multiple Update Detection Support: N/A 00:09:52.568 Firmware Update Granularity: No Information Provided 00:09:52.568 Per-Namespace SMART Log: No 00:09:52.568 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.568 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:52.568 Command Effects Log Page: Supported 00:09:52.568 Get Log Page Extended Data: Supported 00:09:52.568 Telemetry Log Pages: Not Supported 00:09:52.568 Persistent Event Log Pages: Not Supported 00:09:52.568 Supported Log Pages Log Page: May Support 00:09:52.568 Commands Supported & Effects Log Page: Not Supported 00:09:52.568 Feature Identifiers & Effects Log Page:May Support 00:09:52.568 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.568 Data Area 4 for Telemetry Log: Not Supported 00:09:52.568 Error Log Page Entries Supported: 128 00:09:52.568 Keep Alive: Supported 00:09:52.568 Keep Alive Granularity: 10000 ms 00:09:52.568 00:09:52.568 NVM Command Set Attributes 00:09:52.568 ========================== 
00:09:52.568 Submission Queue Entry Size 00:09:52.568 Max: 64 00:09:52.568 Min: 64 00:09:52.568 Completion Queue Entry Size 00:09:52.568 Max: 16 00:09:52.568 Min: 16 00:09:52.568 Number of Namespaces: 32 00:09:52.568 Compare Command: Supported 00:09:52.568 Write Uncorrectable Command: Not Supported 00:09:52.568 Dataset Management Command: Supported 00:09:52.568 Write Zeroes Command: Supported 00:09:52.568 Set Features Save Field: Not Supported 00:09:52.568 Reservations: Not Supported 00:09:52.568 Timestamp: Not Supported 00:09:52.568 Copy: Supported 00:09:52.568 Volatile Write Cache: Present 00:09:52.568 Atomic Write Unit (Normal): 1 00:09:52.568 Atomic Write Unit (PFail): 1 00:09:52.568 Atomic Compare & Write Unit: 1 00:09:52.568 Fused Compare & Write: Supported 00:09:52.568 Scatter-Gather List 00:09:52.568 SGL Command Set: Supported (Dword aligned) 00:09:52.568 SGL Keyed: Not Supported 00:09:52.568 SGL Bit Bucket Descriptor: Not Supported 00:09:52.568 SGL Metadata Pointer: Not Supported 00:09:52.568 Oversized SGL: Not Supported 00:09:52.568 SGL Metadata Address: Not Supported 00:09:52.568 SGL Offset: Not Supported 00:09:52.568 Transport SGL Data Block: Not Supported 00:09:52.568 Replay Protected Memory Block: Not Supported 00:09:52.568 00:09:52.568 Firmware Slot Information 00:09:52.568 ========================= 00:09:52.568 Active slot: 1 00:09:52.568 Slot 1 Firmware Revision: 24.05 00:09:52.568 00:09:52.568 00:09:52.568 Commands Supported and Effects 00:09:52.568 ============================== 00:09:52.568 Admin Commands 00:09:52.568 -------------- 00:09:52.568 Get Log Page (02h): Supported 00:09:52.568 Identify (06h): Supported 00:09:52.568 Abort (08h): Supported 00:09:52.568 Set Features (09h): Supported 00:09:52.568 Get Features (0Ah): Supported 00:09:52.568 Asynchronous Event Request (0Ch): Supported 00:09:52.568 Keep Alive (18h): Supported 00:09:52.568 I/O Commands 00:09:52.568 ------------ 00:09:52.568 Flush (00h): Supported LBA-Change 00:09:52.568 Write (01h): Supported LBA-Change 00:09:52.568 Read (02h): Supported 00:09:52.568 Compare (05h): Supported 00:09:52.568 Write Zeroes (08h): Supported LBA-Change 00:09:52.568 Dataset Management (09h): Supported LBA-Change 00:09:52.568 Copy (19h): Supported LBA-Change 00:09:52.568 Unknown (79h): Supported LBA-Change 00:09:52.568 Unknown (7Ah): Supported 00:09:52.568 00:09:52.568 Error Log 00:09:52.568 ========= 00:09:52.568 00:09:52.568 Arbitration 00:09:52.568 =========== 00:09:52.568 Arbitration Burst: 1 00:09:52.568 00:09:52.568 Power Management 00:09:52.568 ================ 00:09:52.568 Number of Power States: 1 00:09:52.568 Current Power State: Power State #0 00:09:52.568 Power State #0: 00:09:52.568 Max Power: 0.00 W 00:09:52.568 Non-Operational State: Operational 00:09:52.568 Entry Latency: Not Reported 00:09:52.568 Exit Latency: Not Reported 00:09:52.568 Relative Read Throughput: 0 00:09:52.568 Relative Read Latency: 0 00:09:52.568 Relative Write Throughput: 0 00:09:52.568 Relative Write Latency: 0 00:09:52.568 Idle Power: Not Reported 00:09:52.568 Active Power: Not Reported 00:09:52.568 Non-Operational Permissive Mode: Not Supported 00:09:52.568 00:09:52.568 Health Information 00:09:52.568 ================== 00:09:52.568 Critical Warnings: 00:09:52.568 Available Spare Space: OK 00:09:52.568 Temperature: OK 00:09:52.568 Device Reliability: OK 00:09:52.568 Read Only: No 00:09:52.568 Volatile Memory Backup: OK 00:09:52.568 Current Temperature: 0 Kelvin (-2[2024-05-15 02:26:39.885425] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:52.568 [2024-05-15 02:26:39.885441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:52.568 [2024-05-15 02:26:39.885483] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:52.568 [2024-05-15 02:26:39.885499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.568 [2024-05-15 02:26:39.885510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.568 [2024-05-15 02:26:39.885519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.568 [2024-05-15 02:26:39.885528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.568 [2024-05-15 02:26:39.887941] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:52.568 [2024-05-15 02:26:39.887966] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:52.568 [2024-05-15 02:26:39.888991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:52.568 [2024-05-15 02:26:39.889065] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:52.568 [2024-05-15 02:26:39.889080] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:52.568 [2024-05-15 02:26:39.889999] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:52.568 [2024-05-15 02:26:39.890024] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:52.568 [2024-05-15 02:26:39.890083] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:52.568 [2024-05-15 02:26:39.892943] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:52.568 73 Celsius) 00:09:52.568 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:52.568 Available Spare: 0% 00:09:52.568 Available Spare Threshold: 0% 00:09:52.568 Life Percentage Used: 0% 00:09:52.568 Data Units Read: 0 00:09:52.568 Data Units Written: 0 00:09:52.568 Host Read Commands: 0 00:09:52.568 Host Write Commands: 0 00:09:52.568 Controller Busy Time: 0 minutes 00:09:52.568 Power Cycles: 0 00:09:52.568 Power On Hours: 0 hours 00:09:52.568 Unsafe Shutdowns: 0 00:09:52.568 Unrecoverable Media Errors: 0 00:09:52.568 Lifetime Error Log Entries: 0 00:09:52.568 Warning Temperature Time: 0 minutes 00:09:52.568 Critical Temperature Time: 0 minutes 00:09:52.568 00:09:52.568 Number of Queues 00:09:52.568 ================ 00:09:52.568 Number of I/O Submission Queues: 127 00:09:52.568 Number of I/O Completion Queues: 127 00:09:52.568 00:09:52.568 Active Namespaces 00:09:52.568 ================= 00:09:52.568 Namespace 
ID:1 00:09:52.568 Error Recovery Timeout: Unlimited 00:09:52.568 Command Set Identifier: NVM (00h) 00:09:52.568 Deallocate: Supported 00:09:52.568 Deallocated/Unwritten Error: Not Supported 00:09:52.568 Deallocated Read Value: Unknown 00:09:52.568 Deallocate in Write Zeroes: Not Supported 00:09:52.569 Deallocated Guard Field: 0xFFFF 00:09:52.569 Flush: Supported 00:09:52.569 Reservation: Supported 00:09:52.569 Namespace Sharing Capabilities: Multiple Controllers 00:09:52.569 Size (in LBAs): 131072 (0GiB) 00:09:52.569 Capacity (in LBAs): 131072 (0GiB) 00:09:52.569 Utilization (in LBAs): 131072 (0GiB) 00:09:52.569 NGUID: 65EF873AFEB24B5E98B739B1C2B43901 00:09:52.569 UUID: 65ef873a-feb2-4b5e-98b7-39b1c2b43901 00:09:52.569 Thin Provisioning: Not Supported 00:09:52.569 Per-NS Atomic Units: Yes 00:09:52.569 Atomic Boundary Size (Normal): 0 00:09:52.569 Atomic Boundary Size (PFail): 0 00:09:52.569 Atomic Boundary Offset: 0 00:09:52.569 Maximum Single Source Range Length: 65535 00:09:52.569 Maximum Copy Length: 65535 00:09:52.569 Maximum Source Range Count: 1 00:09:52.569 NGUID/EUI64 Never Reused: No 00:09:52.569 Namespace Write Protected: No 00:09:52.569 Number of LBA Formats: 1 00:09:52.569 Current LBA Format: LBA Format #00 00:09:52.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.569 00:09:52.569 02:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:52.569 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.827 [2024-05-15 02:26:40.121849] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:58.155 Initializing NVMe Controllers 00:09:58.155 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:58.155 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:58.155 Initialization complete. Launching workers. 00:09:58.155 ======================================================== 00:09:58.155 Latency(us) 00:09:58.155 Device Information : IOPS MiB/s Average min max 00:09:58.155 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34590.06 135.12 3700.34 1149.92 8199.99 00:09:58.155 ======================================================== 00:09:58.155 Total : 34590.06 135.12 3700.34 1149.92 8199.99 00:09:58.155 00:09:58.155 [2024-05-15 02:26:45.144993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:58.155 02:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:58.155 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.155 [2024-05-15 02:26:45.391204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:03.419 Initializing NVMe Controllers 00:10:03.419 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:03.419 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:03.419 Initialization complete. Launching workers. 
00:10:03.419 ======================================================== 00:10:03.419 Latency(us) 00:10:03.419 Device Information : IOPS MiB/s Average min max 00:10:03.419 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16060.39 62.74 7975.11 6809.99 8106.38 00:10:03.419 ======================================================== 00:10:03.419 Total : 16060.39 62.74 7975.11 6809.99 8106.38 00:10:03.419 00:10:03.419 [2024-05-15 02:26:50.430235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:03.419 02:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:03.419 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.419 [2024-05-15 02:26:50.677420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:08.691 [2024-05-15 02:26:55.760356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:08.691 Initializing NVMe Controllers 00:10:08.691 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:08.691 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:08.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:08.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:08.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:08.691 Initialization complete. Launching workers. 00:10:08.691 Starting thread on core 2 00:10:08.691 Starting thread on core 3 00:10:08.691 Starting thread on core 1 00:10:08.691 02:26:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:08.691 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.691 [2024-05-15 02:26:56.076444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:11.977 [2024-05-15 02:26:59.272218] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:11.977 Initializing NVMe Controllers 00:10:11.977 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:11.977 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:11.977 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:11.977 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:11.977 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:11.977 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:11.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:11.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:11.977 Initialization complete. Launching workers. 
00:10:11.977 Starting thread on core 1 with urgent priority queue 00:10:11.977 Starting thread on core 2 with urgent priority queue 00:10:11.977 Starting thread on core 3 with urgent priority queue 00:10:11.977 Starting thread on core 0 with urgent priority queue 00:10:11.977 SPDK bdev Controller (SPDK1 ) core 0: 3956.00 IO/s 25.28 secs/100000 ios 00:10:11.977 SPDK bdev Controller (SPDK1 ) core 1: 3999.33 IO/s 25.00 secs/100000 ios 00:10:11.977 SPDK bdev Controller (SPDK1 ) core 2: 4145.33 IO/s 24.12 secs/100000 ios 00:10:11.977 SPDK bdev Controller (SPDK1 ) core 3: 4112.00 IO/s 24.32 secs/100000 ios 00:10:11.977 ======================================================== 00:10:11.977 00:10:11.977 02:26:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:11.977 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.235 [2024-05-15 02:26:59.572553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.235 Initializing NVMe Controllers 00:10:12.235 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.235 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.235 Namespace ID: 1 size: 0GB 00:10:12.235 Initialization complete. 00:10:12.235 INFO: using host memory buffer for IO 00:10:12.235 Hello world! 00:10:12.235 [2024-05-15 02:26:59.610166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.493 02:26:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:12.493 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.751 [2024-05-15 02:26:59.922366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:13.686 Initializing NVMe Controllers 00:10:13.686 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:13.686 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:13.686 Initialization complete. Launching workers. 
00:10:13.686 submit (in ns) avg, min, max = 8727.8, 3486.7, 4018064.4 00:10:13.686 complete (in ns) avg, min, max = 24550.3, 2066.7, 6008360.0 00:10:13.686 00:10:13.686 Submit histogram 00:10:13.686 ================ 00:10:13.686 Range in us Cumulative Count 00:10:13.686 3.484 - 3.508: 0.3862% ( 51) 00:10:13.686 3.508 - 3.532: 1.6809% ( 171) 00:10:13.686 3.532 - 3.556: 4.7929% ( 411) 00:10:13.686 3.556 - 3.579: 10.8503% ( 800) 00:10:13.686 3.579 - 3.603: 21.5719% ( 1416) 00:10:13.686 3.603 - 3.627: 31.8770% ( 1361) 00:10:13.686 3.627 - 3.650: 39.1762% ( 964) 00:10:13.686 3.650 - 3.674: 44.3628% ( 685) 00:10:13.686 3.674 - 3.698: 49.4889% ( 677) 00:10:13.686 3.698 - 3.721: 54.8346% ( 706) 00:10:13.686 3.721 - 3.745: 58.7567% ( 518) 00:10:13.686 3.745 - 3.769: 61.3538% ( 343) 00:10:13.686 3.769 - 3.793: 63.8676% ( 332) 00:10:13.686 3.793 - 3.816: 66.7222% ( 377) 00:10:13.686 3.816 - 3.840: 71.0381% ( 570) 00:10:13.686 3.840 - 3.864: 75.3767% ( 573) 00:10:13.686 3.864 - 3.887: 78.6628% ( 434) 00:10:13.686 3.887 - 3.911: 80.8283% ( 286) 00:10:13.686 3.911 - 3.935: 82.8349% ( 265) 00:10:13.686 3.935 - 3.959: 84.3038% ( 194) 00:10:13.686 3.959 - 3.982: 85.9014% ( 211) 00:10:13.686 3.982 - 4.006: 86.9690% ( 141) 00:10:13.686 4.006 - 4.030: 87.9004% ( 123) 00:10:13.686 4.030 - 4.053: 88.6424% ( 98) 00:10:13.686 4.053 - 4.077: 89.5964% ( 126) 00:10:13.686 4.077 - 4.101: 90.3233% ( 96) 00:10:13.686 4.101 - 4.124: 91.0426% ( 95) 00:10:13.686 4.124 - 4.148: 91.4894% ( 59) 00:10:13.686 4.148 - 4.172: 91.7544% ( 35) 00:10:13.686 4.172 - 4.196: 91.9891% ( 31) 00:10:13.686 4.196 - 4.219: 92.1935% ( 27) 00:10:13.686 4.219 - 4.243: 92.3450% ( 20) 00:10:13.686 4.243 - 4.267: 92.4737% ( 17) 00:10:13.686 4.267 - 4.290: 92.7008% ( 30) 00:10:13.686 4.290 - 4.314: 92.8523% ( 20) 00:10:13.686 4.314 - 4.338: 93.0189% ( 22) 00:10:13.686 4.338 - 4.361: 93.1627% ( 19) 00:10:13.686 4.361 - 4.385: 93.3444% ( 24) 00:10:13.686 4.385 - 4.409: 93.5034% ( 21) 00:10:13.686 4.409 - 4.433: 93.6776% ( 23) 00:10:13.686 4.433 - 4.456: 93.7685% ( 12) 00:10:13.686 4.456 - 4.480: 93.8669% ( 13) 00:10:13.686 4.480 - 4.504: 93.9805% ( 15) 00:10:13.686 4.504 - 4.527: 94.0713% ( 12) 00:10:13.686 4.527 - 4.551: 94.1470% ( 10) 00:10:13.686 4.551 - 4.575: 94.2228% ( 10) 00:10:13.686 4.575 - 4.599: 94.3060% ( 11) 00:10:13.686 4.599 - 4.622: 94.3969% ( 12) 00:10:13.686 4.622 - 4.646: 94.4726% ( 10) 00:10:13.686 4.646 - 4.670: 94.5029% ( 4) 00:10:13.686 4.670 - 4.693: 94.5786% ( 10) 00:10:13.686 4.693 - 4.717: 94.6241% ( 6) 00:10:13.686 4.717 - 4.741: 94.6998% ( 10) 00:10:13.686 4.741 - 4.764: 94.7604% ( 8) 00:10:13.686 4.764 - 4.788: 94.8285% ( 9) 00:10:13.686 4.788 - 4.812: 94.9118% ( 11) 00:10:13.686 4.812 - 4.836: 94.9875% ( 10) 00:10:13.686 4.836 - 4.859: 95.0935% ( 14) 00:10:13.686 4.859 - 4.883: 95.1541% ( 8) 00:10:13.686 4.883 - 4.907: 95.2071% ( 7) 00:10:13.686 4.907 - 4.930: 95.2601% ( 7) 00:10:13.686 4.930 - 4.954: 95.3055% ( 6) 00:10:13.686 4.954 - 4.978: 95.3585% ( 7) 00:10:13.686 4.978 - 5.001: 95.3964% ( 5) 00:10:13.686 5.001 - 5.025: 95.4570% ( 8) 00:10:13.686 5.025 - 5.049: 95.5251% ( 9) 00:10:13.686 5.049 - 5.073: 95.6084% ( 11) 00:10:13.686 5.073 - 5.096: 95.6614% ( 7) 00:10:13.686 5.096 - 5.120: 95.7447% ( 11) 00:10:13.686 5.120 - 5.144: 95.8280% ( 11) 00:10:13.686 5.144 - 5.167: 95.9113% ( 11) 00:10:13.686 5.167 - 5.191: 95.9945% ( 11) 00:10:13.686 5.191 - 5.215: 96.0854% ( 12) 00:10:13.686 5.215 - 5.239: 96.1308% ( 6) 00:10:13.686 5.239 - 5.262: 96.1990% ( 9) 00:10:13.686 5.262 - 5.286: 96.2520% ( 7) 00:10:13.686 5.286 - 
5.310: 96.2974% ( 6) 00:10:13.686 5.310 - 5.333: 96.3353% ( 5) 00:10:13.686 5.333 - 5.357: 96.3959% ( 8) 00:10:13.686 5.357 - 5.381: 96.4564% ( 8) 00:10:13.686 5.381 - 5.404: 96.4867% ( 4) 00:10:13.686 5.404 - 5.428: 96.5170% ( 4) 00:10:13.686 5.428 - 5.452: 96.5549% ( 5) 00:10:13.686 5.452 - 5.476: 96.5700% ( 2) 00:10:13.686 5.476 - 5.499: 96.6003% ( 4) 00:10:13.686 5.499 - 5.523: 96.6079% ( 1) 00:10:13.686 5.523 - 5.547: 96.6154% ( 1) 00:10:13.686 5.547 - 5.570: 96.6457% ( 4) 00:10:13.686 5.570 - 5.594: 96.6760% ( 4) 00:10:13.686 5.594 - 5.618: 96.6987% ( 3) 00:10:13.686 5.618 - 5.641: 96.7214% ( 3) 00:10:13.686 5.641 - 5.665: 96.7442% ( 3) 00:10:13.686 5.665 - 5.689: 96.7669% ( 3) 00:10:13.686 5.689 - 5.713: 96.7744% ( 1) 00:10:13.686 5.713 - 5.736: 96.7820% ( 1) 00:10:13.686 5.736 - 5.760: 96.7972% ( 2) 00:10:13.686 5.760 - 5.784: 96.8047% ( 1) 00:10:13.686 5.784 - 5.807: 96.8123% ( 1) 00:10:13.686 5.807 - 5.831: 96.8350% ( 3) 00:10:13.686 5.831 - 5.855: 96.8426% ( 1) 00:10:13.686 5.855 - 5.879: 96.8577% ( 2) 00:10:13.686 5.879 - 5.902: 96.8653% ( 1) 00:10:13.686 5.902 - 5.926: 96.8804% ( 2) 00:10:13.686 5.926 - 5.950: 96.9032% ( 3) 00:10:13.686 5.973 - 5.997: 96.9107% ( 1) 00:10:13.686 5.997 - 6.021: 96.9334% ( 3) 00:10:13.686 6.021 - 6.044: 96.9410% ( 1) 00:10:13.687 6.044 - 6.068: 96.9637% ( 3) 00:10:13.687 6.068 - 6.116: 96.9864% ( 3) 00:10:13.687 6.116 - 6.163: 97.0092% ( 3) 00:10:13.687 6.210 - 6.258: 97.0167% ( 1) 00:10:13.687 6.258 - 6.305: 97.0319% ( 2) 00:10:13.687 6.305 - 6.353: 97.0394% ( 1) 00:10:13.687 6.353 - 6.400: 97.0470% ( 1) 00:10:13.687 6.400 - 6.447: 97.0622% ( 2) 00:10:13.687 6.542 - 6.590: 97.0773% ( 2) 00:10:13.687 6.590 - 6.637: 97.0849% ( 1) 00:10:13.687 6.637 - 6.684: 97.1227% ( 5) 00:10:13.687 6.684 - 6.732: 97.1455% ( 3) 00:10:13.687 6.732 - 6.779: 97.1530% ( 1) 00:10:13.687 6.779 - 6.827: 97.1833% ( 4) 00:10:13.687 6.827 - 6.874: 97.2060% ( 3) 00:10:13.687 6.874 - 6.921: 97.2287% ( 3) 00:10:13.687 6.921 - 6.969: 97.2363% ( 1) 00:10:13.687 6.969 - 7.016: 97.2590% ( 3) 00:10:13.687 7.016 - 7.064: 97.2666% ( 1) 00:10:13.687 7.111 - 7.159: 97.2817% ( 2) 00:10:13.687 7.159 - 7.206: 97.3120% ( 4) 00:10:13.687 7.206 - 7.253: 97.3196% ( 1) 00:10:13.687 7.253 - 7.301: 97.3272% ( 1) 00:10:13.687 7.301 - 7.348: 97.3423% ( 2) 00:10:13.687 7.348 - 7.396: 97.3726% ( 4) 00:10:13.687 7.396 - 7.443: 97.3802% ( 1) 00:10:13.687 7.443 - 7.490: 97.3953% ( 2) 00:10:13.687 7.490 - 7.538: 97.4180% ( 3) 00:10:13.687 7.538 - 7.585: 97.4408% ( 3) 00:10:13.687 7.585 - 7.633: 97.4483% ( 1) 00:10:13.687 7.633 - 7.680: 97.4635% ( 2) 00:10:13.687 7.680 - 7.727: 97.4862% ( 3) 00:10:13.687 7.775 - 7.822: 97.5089% ( 3) 00:10:13.687 7.822 - 7.870: 97.5240% ( 2) 00:10:13.687 7.870 - 7.917: 97.5468% ( 3) 00:10:13.687 7.964 - 8.012: 97.5543% ( 1) 00:10:13.687 8.012 - 8.059: 97.5695% ( 2) 00:10:13.687 8.059 - 8.107: 97.5770% ( 1) 00:10:13.687 8.107 - 8.154: 97.5846% ( 1) 00:10:13.687 8.154 - 8.201: 97.5998% ( 2) 00:10:13.687 8.201 - 8.249: 97.6073% ( 1) 00:10:13.687 8.249 - 8.296: 97.6300% ( 3) 00:10:13.687 8.296 - 8.344: 97.6452% ( 2) 00:10:13.687 8.391 - 8.439: 97.6603% ( 2) 00:10:13.687 8.486 - 8.533: 97.6679% ( 1) 00:10:13.687 8.581 - 8.628: 97.6830% ( 2) 00:10:13.687 8.628 - 8.676: 97.6982% ( 2) 00:10:13.687 8.676 - 8.723: 97.7285% ( 4) 00:10:13.687 8.723 - 8.770: 97.7436% ( 2) 00:10:13.687 8.770 - 8.818: 97.7512% ( 1) 00:10:13.687 8.818 - 8.865: 97.7663% ( 2) 00:10:13.687 8.865 - 8.913: 97.7739% ( 1) 00:10:13.687 8.913 - 8.960: 97.7815% ( 1) 00:10:13.687 8.960 - 9.007: 97.7966% ( 2) 
00:10:13.687 9.007 - 9.055: 97.8118% ( 2) 00:10:13.687 9.150 - 9.197: 97.8193% ( 1) 00:10:13.687 9.197 - 9.244: 97.8269% ( 1) 00:10:13.687 9.244 - 9.292: 97.8345% ( 1) 00:10:13.687 9.292 - 9.339: 97.8572% ( 3) 00:10:13.687 9.339 - 9.387: 97.8648% ( 1) 00:10:13.687 9.387 - 9.434: 97.8799% ( 2) 00:10:13.687 9.434 - 9.481: 97.8951% ( 2) 00:10:13.687 9.481 - 9.529: 97.9026% ( 1) 00:10:13.687 9.529 - 9.576: 97.9102% ( 1) 00:10:13.687 9.576 - 9.624: 97.9253% ( 2) 00:10:13.687 9.624 - 9.671: 97.9329% ( 1) 00:10:13.687 9.671 - 9.719: 97.9405% ( 1) 00:10:13.687 9.719 - 9.766: 97.9481% ( 1) 00:10:13.687 9.766 - 9.813: 97.9632% ( 2) 00:10:13.687 9.813 - 9.861: 97.9708% ( 1) 00:10:13.687 9.908 - 9.956: 98.0011% ( 4) 00:10:13.687 9.956 - 10.003: 98.0238% ( 3) 00:10:13.687 10.003 - 10.050: 98.0389% ( 2) 00:10:13.687 10.050 - 10.098: 98.0541% ( 2) 00:10:13.687 10.098 - 10.145: 98.0768% ( 3) 00:10:13.687 10.287 - 10.335: 98.0919% ( 2) 00:10:13.687 10.335 - 10.382: 98.0995% ( 1) 00:10:13.687 10.382 - 10.430: 98.1071% ( 1) 00:10:13.687 10.572 - 10.619: 98.1298% ( 3) 00:10:13.687 10.619 - 10.667: 98.1374% ( 1) 00:10:13.687 10.667 - 10.714: 98.1525% ( 2) 00:10:13.687 10.761 - 10.809: 98.1601% ( 1) 00:10:13.687 10.809 - 10.856: 98.1676% ( 1) 00:10:13.687 10.904 - 10.951: 98.1828% ( 2) 00:10:13.687 10.951 - 10.999: 98.1904% ( 1) 00:10:13.687 11.046 - 11.093: 98.2055% ( 2) 00:10:13.687 11.141 - 11.188: 98.2131% ( 1) 00:10:13.687 11.236 - 11.283: 98.2206% ( 1) 00:10:13.687 11.283 - 11.330: 98.2282% ( 1) 00:10:13.687 11.330 - 11.378: 98.2358% ( 1) 00:10:13.687 11.425 - 11.473: 98.2509% ( 2) 00:10:13.687 11.473 - 11.520: 98.2661% ( 2) 00:10:13.687 11.520 - 11.567: 98.2736% ( 1) 00:10:13.687 11.615 - 11.662: 98.2812% ( 1) 00:10:13.687 11.662 - 11.710: 98.3115% ( 4) 00:10:13.687 11.852 - 11.899: 98.3191% ( 1) 00:10:13.687 11.899 - 11.947: 98.3342% ( 2) 00:10:13.687 11.947 - 11.994: 98.3418% ( 1) 00:10:13.687 11.994 - 12.041: 98.3796% ( 5) 00:10:13.687 12.041 - 12.089: 98.3872% ( 1) 00:10:13.687 12.089 - 12.136: 98.3948% ( 1) 00:10:13.687 12.136 - 12.231: 98.4326% ( 5) 00:10:13.687 12.231 - 12.326: 98.4554% ( 3) 00:10:13.687 12.326 - 12.421: 98.4932% ( 5) 00:10:13.687 12.610 - 12.705: 98.5084% ( 2) 00:10:13.687 12.705 - 12.800: 98.5235% ( 2) 00:10:13.687 12.800 - 12.895: 98.5387% ( 2) 00:10:13.687 12.895 - 12.990: 98.5538% ( 2) 00:10:13.687 12.990 - 13.084: 98.5689% ( 2) 00:10:13.687 13.084 - 13.179: 98.5917% ( 3) 00:10:13.687 13.179 - 13.274: 98.6068% ( 2) 00:10:13.687 13.274 - 13.369: 98.6219% ( 2) 00:10:13.687 13.369 - 13.464: 98.6522% ( 4) 00:10:13.687 13.559 - 13.653: 98.6598% ( 1) 00:10:13.687 13.653 - 13.748: 98.6674% ( 1) 00:10:13.687 13.748 - 13.843: 98.6749% ( 1) 00:10:13.687 13.938 - 14.033: 98.6977% ( 3) 00:10:13.687 14.033 - 14.127: 98.7204% ( 3) 00:10:13.687 14.127 - 14.222: 98.7355% ( 2) 00:10:13.687 14.222 - 14.317: 98.7658% ( 4) 00:10:13.687 14.317 - 14.412: 98.7734% ( 1) 00:10:13.687 14.412 - 14.507: 98.8037% ( 4) 00:10:13.687 14.507 - 14.601: 98.8264% ( 3) 00:10:13.687 14.601 - 14.696: 98.8340% ( 1) 00:10:13.687 14.696 - 14.791: 98.8567% ( 3) 00:10:13.687 14.791 - 14.886: 98.8870% ( 4) 00:10:13.687 14.886 - 14.981: 98.9172% ( 4) 00:10:13.687 14.981 - 15.076: 98.9324% ( 2) 00:10:13.687 15.076 - 15.170: 98.9475% ( 2) 00:10:13.687 15.360 - 15.455: 98.9551% ( 1) 00:10:13.687 16.024 - 16.119: 98.9627% ( 1) 00:10:13.687 16.213 - 16.308: 98.9702% ( 1) 00:10:13.687 16.308 - 16.403: 98.9778% ( 1) 00:10:13.687 17.067 - 17.161: 98.9854% ( 1) 00:10:13.687 17.256 - 17.351: 99.0005% ( 2) 00:10:13.687 17.351 - 
17.446: 99.0157% ( 2) 00:10:13.687 17.541 - 17.636: 99.0384% ( 3) 00:10:13.687 17.636 - 17.730: 99.0914% ( 7) 00:10:13.687 17.730 - 17.825: 99.1141% ( 3) 00:10:13.687 17.825 - 17.920: 99.1444% ( 4) 00:10:13.687 17.920 - 18.015: 99.2050% ( 8) 00:10:13.687 18.015 - 18.110: 99.2353% ( 4) 00:10:13.687 18.110 - 18.204: 99.3110% ( 10) 00:10:13.687 18.204 - 18.299: 99.3640% ( 7) 00:10:13.687 18.299 - 18.394: 99.4094% ( 6) 00:10:13.687 18.394 - 18.489: 99.4624% ( 7) 00:10:13.687 18.489 - 18.584: 99.5154% ( 7) 00:10:13.687 18.584 - 18.679: 99.5836% ( 9) 00:10:13.687 18.679 - 18.773: 99.6214% ( 5) 00:10:13.687 18.773 - 18.868: 99.6517% ( 4) 00:10:13.687 18.868 - 18.963: 99.6744% ( 3) 00:10:13.687 18.963 - 19.058: 99.6896% ( 2) 00:10:13.687 19.058 - 19.153: 99.7198% ( 4) 00:10:13.687 19.247 - 19.342: 99.7274% ( 1) 00:10:13.687 19.437 - 19.532: 99.7350% ( 1) 00:10:13.687 19.532 - 19.627: 99.7501% ( 2) 00:10:13.687 19.721 - 19.816: 99.7653% ( 2) 00:10:13.687 20.196 - 20.290: 99.7728% ( 1) 00:10:13.687 20.480 - 20.575: 99.7804% ( 1) 00:10:13.687 20.764 - 20.859: 99.7880% ( 1) 00:10:13.687 21.239 - 21.333: 99.7956% ( 1) 00:10:13.687 21.713 - 21.807: 99.8031% ( 1) 00:10:13.687 22.566 - 22.661: 99.8107% ( 1) 00:10:13.687 23.135 - 23.230: 99.8183% ( 1) 00:10:13.687 23.893 - 23.988: 99.8258% ( 1) 00:10:13.687 24.462 - 24.652: 99.8334% ( 1) 00:10:13.687 24.652 - 24.841: 99.8410% ( 1) 00:10:13.687 26.927 - 27.117: 99.8486% ( 1) 00:10:13.687 28.065 - 28.255: 99.8561% ( 1) 00:10:13.687 29.013 - 29.203: 99.8637% ( 1) 00:10:13.687 30.151 - 30.341: 99.8713% ( 1) 00:10:13.687 32.047 - 32.237: 99.8789% ( 1) 00:10:13.687 1808.308 - 1820.444: 99.8864% ( 1) 00:10:13.687 3980.705 - 4004.978: 99.9546% ( 9) 00:10:13.687 4004.978 - 4029.250: 100.0000% ( 6) 00:10:13.687 00:10:13.687 Complete histogram 00:10:13.687 ================== 00:10:13.687 Range in us Cumulative Count 00:10:13.687 2.062 - 2.074: 0.1666% ( 22) 00:10:13.687 2.074 - 2.086: 10.0174% ( 1301) 00:10:13.687 2.086 - 2.098: 16.5518% ( 863) 00:10:13.687 2.098 - 2.110: 20.3831% ( 506) 00:10:13.687 2.110 - 2.121: 49.4283% ( 3836) 00:10:13.687 2.121 - 2.133: 55.8795% ( 852) 00:10:13.687 2.133 - 2.145: 58.2646% ( 315) 00:10:13.687 2.145 - 2.157: 63.3452% ( 671) 00:10:13.687 2.157 - 2.169: 64.7611% ( 187) 00:10:13.687 2.169 - 2.181: 67.9185% ( 417) 00:10:13.688 2.181 - 2.193: 76.4822% ( 1131) 00:10:13.688 2.193 - 2.204: 78.4357% ( 258) 00:10:13.688 2.204 - 2.216: 79.4427% ( 133) 00:10:13.688 2.216 - 2.228: 81.7067% ( 299) 00:10:13.688 2.228 - 2.240: 83.8343% ( 281) 00:10:13.688 2.240 - 2.252: 85.4698% ( 216) 00:10:13.688 2.252 - 2.264: 89.2254% ( 496) 00:10:13.688 2.264 - 2.276: 90.5353% ( 173) 00:10:13.688 2.276 - 2.287: 91.0048% ( 62) 00:10:13.688 2.287 - 2.299: 91.5348% ( 70) 00:10:13.688 2.299 - 2.311: 92.2162% ( 90) 00:10:13.688 2.311 - 2.323: 92.6024% ( 51) 00:10:13.688 2.323 - 2.335: 92.7236% ( 16) 00:10:13.688 2.335 - 2.347: 92.8296% ( 14) 00:10:13.688 2.347 - 2.359: 92.9128% ( 11) 00:10:13.688 2.359 - 2.370: 92.9659% ( 7) 00:10:13.688 2.370 - 2.382: 93.1249% ( 21) 00:10:13.688 2.382 - 2.394: 93.4807% ( 47) 00:10:13.688 2.394 - 2.406: 93.6852% ( 27) 00:10:13.688 2.406 - 2.418: 93.8820% ( 26) 00:10:13.688 2.418 - 2.430: 94.1546% ( 36) 00:10:13.688 2.430 - 2.441: 94.3666% ( 28) 00:10:13.688 2.441 - 2.453: 94.5862% ( 29) 00:10:13.688 2.453 - 2.465: 94.8209% ( 31) 00:10:13.688 2.465 - 2.477: 94.9799% ( 21) 00:10:13.688 2.477 - 2.489: 95.2071% ( 30) 00:10:13.688 2.489 - 2.501: 95.3055% ( 13) 00:10:13.688 2.501 - 2.513: 95.4494% ( 19) 00:10:13.688 2.513 - 2.524: 
95.5932% ( 19) 00:10:13.688 2.524 - 2.536: 95.6765% ( 11) 00:10:13.688 2.536 - 2.548: 95.7674% ( 12) 00:10:13.688 2.548 - 2.560: 95.8053% ( 5) 00:10:13.688 2.560 - 2.572: 95.8961% ( 12) 00:10:13.688 2.572 - 2.584: 96.0097% ( 15) 00:10:13.688 2.584 - 2.596: 96.0703% ( 8) 00:10:13.688 2.596 - 2.607: 96.2217% ( 20) 00:10:13.688 2.607 - 2.619: 96.3353% ( 15) 00:10:13.688 2.619 - 2.631: 96.4110% ( 10) 00:10:13.688 2.631 - 2.643: 96.4716% ( 8) 00:10:13.688 2.643 - 2.655: 96.5170% ( 6) 00:10:13.688 2.655 - 2.667: 96.5473% ( 4) 00:10:13.688 2.667 - 2.679: 96.5927% ( 6) 00:10:13.688 2.679 - 2.690: 96.6306% ( 5) 00:10:13.688 2.690 - 2.702: 96.6684% ( 5) 00:10:13.688 2.702 - 2.714: 96.7290% ( 8) 00:10:13.688 2.714 - 2.726: 96.8274% ( 13) 00:10:13.688 2.726 - 2.738: 96.8577% ( 4) 00:10:13.688 2.738 - 2.750: 96.9486% ( 12) 00:10:13.688 2.750 - 2.761: 96.9713% ( 3) 00:10:13.688 2.761 - 2.773: 96.9940% ( 3) 00:10:13.688 2.773 - 2.785: 97.0622% ( 9) 00:10:13.688 2.785 - 2.797: 97.0849% ( 3) 00:10:13.688 2.797 - 2.809: 97.0925% ( 1) 00:10:13.688 2.809 - 2.821: 97.1303% ( 5) 00:10:13.688 2.821 - 2.833: 97.1606% ( 4) 00:10:13.688 2.833 - 2.844: 97.1682% ( 1) 00:10:13.688 2.844 - 2.856: 97.1757% ( 1) 00:10:13.688 2.856 - 2.868: 97.1909% ( 2) 00:10:13.688 2.868 - 2.880: 97.2060% ( 2) 00:10:13.688 2.880 - 2.892: 97.2515% ( 6) 00:10:13.688 2.892 - 2.904: 97.2817% ( 4) 00:10:13.688 2.904 - 2.916: 97.3045% ( 3) 00:10:13.688 2.916 - 2.927: 97.3347% ( 4) 00:10:13.688 2.927 - 2.939: 97.3650% ( 4) 00:10:13.688 2.939 - 2.951: 97.4029% ( 5) 00:10:13.688 2.963 - 2.975: 97.4332% ( 4) 00:10:13.688 2.975 - 2.987: 97.4483% ( 2) 00:10:13.688 2.987 - 2.999: 97.4559% ( 1) 00:10:13.688 2.999 - 3.010: 97.4862% ( 4) 00:10:13.688 3.010 - 3.022: 97.5013% ( 2) 00:10:13.688 3.022 - 3.034: 97.5392% ( 5) 00:10:13.688 3.034 - 3.058: 97.5770% ( 5) 00:10:13.688 3.058 - 3.081: 97.6376% ( 8) 00:10:13.688 3.081 - 3.105: 97.6906% ( 7) 00:10:13.688 3.105 - 3.129: 97.7739% ( 11) 00:10:13.688 3.129 - 3.153: 97.7966% ( 3) 00:10:13.688 3.153 - 3.176: 97.8421% ( 6) 00:10:13.688 3.176 - 3.200: 97.8648% ( 3) 00:10:13.688 3.200 - 3.224: 97.9026% ( 5) 00:10:13.688 3.224 - 3.247: 97.9632% ( 8) 00:10:13.688 3.247 - 3.271: 98.0086% ( 6) 00:10:13.688 3.271 - 3.295: 98.0616% ( 7) 00:10:13.688 3.295 - 3.319: 98.1146% ( 7) 00:10:13.688 3.319 - 3.342: 98.1374% ( 3) 00:10:13.688 3.342 - 3.366: 98.1752% ( 5) 00:10:13.688 3.366 - 3.390: 98.1828% ( 1) 00:10:13.688 3.390 - 3.413: 98.2055% ( 3) 00:10:13.688 3.413 - 3.437: 98.2434% ( 5) 00:10:13.688 3.437 - 3.461: 98.2812% ( 5) 00:10:13.688 3.461 - 3.484: 98.2888% ( 1) 00:10:13.688 3.484 - 3.508: 98.3342% ( 6) 00:10:13.688 3.508 - 3.532: 98.3494% ( 2) 00:10:13.688 3.556 - 3.579: 98.3872% ( 5) 00:10:13.688 3.579 - 3.603: 98.4251% ( 5) 00:10:13.688 3.627 - 3.650: 98.4629% ( 5) 00:10:13.688 3.650 - 3.674: 98.4705% ( 1) 00:10:13.688 3.674 - 3.698: 98.4932% ( 3) 00:10:13.688 3.698 - 3.721: 98.5235% ( 4) 00:10:13.688 3.721 - 3.745: 98.5387% ( 2) 00:10:13.688 3.745 - 3.769: 98.5614% ( 3) 00:10:13.688 3.769 - 3.793: 98.5689% ( 1) 00:10:13.688 3.793 - 3.816: 98.5917% ( 3) 00:10:13.688 3.840 - 3.864: 98.6068% ( 2) 00:10:13.688 3.864 - 3.887: 98.6144% ( 1) 00:10:13.688 3.887 - 3.911: 98.6219% ( 1) 00:10:13.688 3.911 - 3.935: 98.6371% ( 2) 00:10:13.688 3.959 - 3.982: 98.6522% ( 2) 00:10:13.688 4.006 - 4.030: 98.6598% ( 1) 00:10:13.688 4.030 - 4.053: 98.6674% ( 1) 00:10:13.688 4.124 - 4.148: 98.6749% ( 1) 00:10:13.688 4.219 - 4.243: 98.6825% ( 1) 00:10:13.688 4.338 - 4.361: 98.6901% ( 1) 00:10:13.688 4.385 - 4.409: 98.6977% ( 1) 
00:10:13.688 4.409 - 4.433: 98.7128% ( 2) 00:10:13.688 4.575 - 4.599: 98.7204% ( 1) 00:10:13.688 4.788 - 4.812: 98.7279% ( 1) 00:10:13.688 4.930 - 4.954: 98.7355% ( 1) 00:10:13.688 5.191 - 5.215: 98.7431% ( 1) 00:10:13.688 5.286 - 5.310: 98.7507% ( 1) 00:10:13.688 5.381 - 5.404: 98.7582% ( 1) 00:10:13.688 5.831 - 5.855: 98.7734% ( 2) 00:10:13.688 5.879 - 5.902: 98.7809% ( 1) 00:10:13.688 5.950 - 5.973: 98.7885% ( 1) 00:10:13.688 6.068 - 6.116: 98.7961% ( 1) 00:10:13.688 6.163 - 6.210: 98.8112% ( 2) 00:10:13.688 6.258 - 6.305: 98.8188% ( 1) 00:10:13.688 6.305 - 6.353: 98.8264% ( 1) 00:10:13.688 6.353 - 6.400: 98.8415% ( 2) 00:10:13.688 6.637 - 6.684: 98.8491% ( 1) 00:10:13.688 6.874 - 6.921: 98.8567% ( 1) 00:10:13.688 7.064 - 7.111: 98.8642% ( 1) 00:10:13.688 7.111 - 7.159: 98.8718% ( 1) 00:10:13.688 7.159 - 7.206: 98.8794% ( 1) 00:10:13.688 7.538 - 7.585: 98.8945% ( [2024-05-15 02:27:00.946658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:13.688 2) 00:10:13.688 7.775 - 7.822: 98.9021% ( 1) 00:10:13.688 8.296 - 8.344: 98.9097% ( 1) 00:10:13.688 8.770 - 8.818: 98.9172% ( 1) 00:10:13.688 8.865 - 8.913: 98.9248% ( 1) 00:10:13.688 10.003 - 10.050: 98.9324% ( 1) 00:10:13.688 10.193 - 10.240: 98.9400% ( 1) 00:10:13.688 10.572 - 10.619: 98.9475% ( 1) 00:10:13.688 11.330 - 11.378: 98.9551% ( 1) 00:10:13.688 12.231 - 12.326: 98.9627% ( 1) 00:10:13.688 12.610 - 12.705: 98.9702% ( 1) 00:10:13.688 13.084 - 13.179: 98.9778% ( 1) 00:10:13.688 13.843 - 13.938: 98.9854% ( 1) 00:10:13.688 15.739 - 15.834: 99.0005% ( 2) 00:10:13.688 15.834 - 15.929: 99.0232% ( 3) 00:10:13.688 15.929 - 16.024: 99.0308% ( 1) 00:10:13.688 16.024 - 16.119: 99.0535% ( 3) 00:10:13.688 16.119 - 16.213: 99.0990% ( 6) 00:10:13.688 16.213 - 16.308: 99.1595% ( 8) 00:10:13.688 16.308 - 16.403: 99.1823% ( 3) 00:10:13.688 16.403 - 16.498: 99.1974% ( 2) 00:10:13.688 16.498 - 16.593: 99.2731% ( 10) 00:10:13.688 16.593 - 16.687: 99.3337% ( 8) 00:10:13.688 16.687 - 16.782: 99.3488% ( 2) 00:10:13.688 16.782 - 16.877: 99.3564% ( 1) 00:10:13.688 17.067 - 17.161: 99.3791% ( 3) 00:10:13.688 17.256 - 17.351: 99.3867% ( 1) 00:10:13.688 17.541 - 17.636: 99.4018% ( 2) 00:10:13.688 18.679 - 18.773: 99.4094% ( 1) 00:10:13.688 20.101 - 20.196: 99.4170% ( 1) 00:10:13.688 21.807 - 21.902: 99.4245% ( 1) 00:10:13.688 22.376 - 22.471: 99.4321% ( 1) 00:10:13.688 36.409 - 36.599: 99.4397% ( 1) 00:10:13.688 1341.061 - 1347.129: 99.4473% ( 1) 00:10:13.688 1638.400 - 1650.536: 99.4548% ( 1) 00:10:13.688 1662.673 - 1674.809: 99.4624% ( 1) 00:10:13.688 2742.803 - 2754.939: 99.4700% ( 1) 00:10:13.688 3058.347 - 3070.483: 99.4775% ( 1) 00:10:13.688 3131.164 - 3155.437: 99.4851% ( 1) 00:10:13.688 3713.707 - 3737.979: 99.4927% ( 1) 00:10:13.688 3980.705 - 4004.978: 99.8561% ( 48) 00:10:13.688 4004.978 - 4029.250: 99.9621% ( 14) 00:10:13.688 5000.154 - 5024.427: 99.9697% ( 1) 00:10:13.688 5971.058 - 5995.330: 99.9849% ( 2) 00:10:13.688 5995.330 - 6019.603: 100.0000% ( 2) 00:10:13.688 00:10:13.688 02:27:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:13.688 02:27:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:13.688 02:27:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:13.688 02:27:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 
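Note on the AER namespace test that follows: stripped of the shell-trace noise, it boils down to four rpc.py calls against the running target — list the subsystems, create a 64 MiB malloc bdev, attach it to cnode1 as a second namespace (which fires the namespace-attribute-changed AER that the aer example below is waiting for), then list the subsystems again to confirm the new namespace. A minimal sketch using the same script, bdev name, and NQN as this run; the SPDK_DIR variable is illustrative shorthand for the full workspace path used in the log:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk          # path used in this run
    $SPDK_DIR/scripts/rpc.py nvmf_get_subsystems                        # initial subsystem/namespace listing
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3   # 64 MiB bdev, 512-byte blocks
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # add as nsid 2, triggers the AER
    $SPDK_DIR/scripts/rpc.py nvmf_get_subsystems                        # Malloc3 now listed as nsid 2 on cnode1

The aer example (test/nvme/aer/aer) is started first with -t /tmp/aer_touch_file, and the script waits on that touch file (waitforfile /tmp/aer_touch_file) so the namespace is only added once the AER listener is connected.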
00:10:13.688 02:27:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:13.946 [ 00:10:13.946 { 00:10:13.946 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:13.946 "subtype": "Discovery", 00:10:13.946 "listen_addresses": [], 00:10:13.946 "allow_any_host": true, 00:10:13.946 "hosts": [] 00:10:13.946 }, 00:10:13.946 { 00:10:13.946 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:13.946 "subtype": "NVMe", 00:10:13.946 "listen_addresses": [ 00:10:13.946 { 00:10:13.946 "trtype": "VFIOUSER", 00:10:13.946 "adrfam": "IPv4", 00:10:13.946 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:13.946 "trsvcid": "0" 00:10:13.946 } 00:10:13.946 ], 00:10:13.946 "allow_any_host": true, 00:10:13.946 "hosts": [], 00:10:13.946 "serial_number": "SPDK1", 00:10:13.946 "model_number": "SPDK bdev Controller", 00:10:13.946 "max_namespaces": 32, 00:10:13.946 "min_cntlid": 1, 00:10:13.946 "max_cntlid": 65519, 00:10:13.946 "namespaces": [ 00:10:13.946 { 00:10:13.946 "nsid": 1, 00:10:13.946 "bdev_name": "Malloc1", 00:10:13.946 "name": "Malloc1", 00:10:13.946 "nguid": "65EF873AFEB24B5E98B739B1C2B43901", 00:10:13.946 "uuid": "65ef873a-feb2-4b5e-98b7-39b1c2b43901" 00:10:13.946 } 00:10:13.946 ] 00:10:13.946 }, 00:10:13.946 { 00:10:13.946 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:13.946 "subtype": "NVMe", 00:10:13.946 "listen_addresses": [ 00:10:13.946 { 00:10:13.946 "trtype": "VFIOUSER", 00:10:13.946 "adrfam": "IPv4", 00:10:13.946 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:13.946 "trsvcid": "0" 00:10:13.946 } 00:10:13.946 ], 00:10:13.946 "allow_any_host": true, 00:10:13.946 "hosts": [], 00:10:13.947 "serial_number": "SPDK2", 00:10:13.947 "model_number": "SPDK bdev Controller", 00:10:13.947 "max_namespaces": 32, 00:10:13.947 "min_cntlid": 1, 00:10:13.947 "max_cntlid": 65519, 00:10:13.947 "namespaces": [ 00:10:13.947 { 00:10:13.947 "nsid": 1, 00:10:13.947 "bdev_name": "Malloc2", 00:10:13.947 "name": "Malloc2", 00:10:13.947 "nguid": "4358F44CE9AF40B899C186FD20406CE8", 00:10:13.947 "uuid": "4358f44c-e9af-40b8-99c1-86fd20406ce8" 00:10:13.947 } 00:10:13.947 ] 00:10:13.947 } 00:10:13.947 ] 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2254404 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:13.947 02:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:13.947 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.205 [2024-05-15 02:27:01.436479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:14.205 Malloc3 00:10:14.205 02:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:14.463 [2024-05-15 02:27:01.796104] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:14.463 02:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.463 Asynchronous Event Request test 00:10:14.463 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.463 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.463 Registering asynchronous event callbacks... 00:10:14.463 Starting namespace attribute notice tests for all controllers... 00:10:14.463 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:14.463 aer_cb - Changed Namespace 00:10:14.463 Cleaning up... 00:10:14.721 [ 00:10:14.721 { 00:10:14.721 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:14.721 "subtype": "Discovery", 00:10:14.721 "listen_addresses": [], 00:10:14.721 "allow_any_host": true, 00:10:14.721 "hosts": [] 00:10:14.721 }, 00:10:14.721 { 00:10:14.721 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:14.721 "subtype": "NVMe", 00:10:14.721 "listen_addresses": [ 00:10:14.721 { 00:10:14.721 "trtype": "VFIOUSER", 00:10:14.721 "adrfam": "IPv4", 00:10:14.721 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:14.721 "trsvcid": "0" 00:10:14.721 } 00:10:14.721 ], 00:10:14.721 "allow_any_host": true, 00:10:14.721 "hosts": [], 00:10:14.721 "serial_number": "SPDK1", 00:10:14.721 "model_number": "SPDK bdev Controller", 00:10:14.721 "max_namespaces": 32, 00:10:14.721 "min_cntlid": 1, 00:10:14.721 "max_cntlid": 65519, 00:10:14.721 "namespaces": [ 00:10:14.721 { 00:10:14.721 "nsid": 1, 00:10:14.721 "bdev_name": "Malloc1", 00:10:14.721 "name": "Malloc1", 00:10:14.721 "nguid": "65EF873AFEB24B5E98B739B1C2B43901", 00:10:14.721 "uuid": "65ef873a-feb2-4b5e-98b7-39b1c2b43901" 00:10:14.721 }, 00:10:14.721 { 00:10:14.721 "nsid": 2, 00:10:14.721 "bdev_name": "Malloc3", 00:10:14.721 "name": "Malloc3", 00:10:14.721 "nguid": "47FC2F9C36D34A5DB1E2916C42BBB880", 00:10:14.721 "uuid": "47fc2f9c-36d3-4a5d-b1e2-916c42bbb880" 00:10:14.721 } 00:10:14.721 ] 00:10:14.721 }, 00:10:14.721 { 00:10:14.721 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:14.721 "subtype": "NVMe", 00:10:14.721 "listen_addresses": [ 00:10:14.721 { 00:10:14.721 "trtype": "VFIOUSER", 00:10:14.721 "adrfam": "IPv4", 00:10:14.721 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:14.721 "trsvcid": "0" 00:10:14.721 } 00:10:14.721 ], 00:10:14.721 "allow_any_host": true, 00:10:14.721 "hosts": [], 00:10:14.721 "serial_number": "SPDK2", 00:10:14.721 "model_number": "SPDK bdev Controller", 00:10:14.721 
"max_namespaces": 32, 00:10:14.721 "min_cntlid": 1, 00:10:14.721 "max_cntlid": 65519, 00:10:14.721 "namespaces": [ 00:10:14.721 { 00:10:14.721 "nsid": 1, 00:10:14.721 "bdev_name": "Malloc2", 00:10:14.721 "name": "Malloc2", 00:10:14.721 "nguid": "4358F44CE9AF40B899C186FD20406CE8", 00:10:14.721 "uuid": "4358f44c-e9af-40b8-99c1-86fd20406ce8" 00:10:14.721 } 00:10:14.721 ] 00:10:14.721 } 00:10:14.721 ] 00:10:14.721 02:27:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2254404 00:10:14.721 02:27:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:14.721 02:27:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:14.721 02:27:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:14.721 02:27:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:14.721 [2024-05-15 02:27:02.076108] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:10:14.721 [2024-05-15 02:27:02.076148] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254452 ] 00:10:14.721 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.721 [2024-05-15 02:27:02.110064] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:14.721 [2024-05-15 02:27:02.118259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:14.721 [2024-05-15 02:27:02.118288] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc79f8eb000 00:10:14.722 [2024-05-15 02:27:02.119250] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:14.722 [2024-05-15 02:27:02.120241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:14.722 [2024-05-15 02:27:02.121261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:14.722 [2024-05-15 02:27:02.122271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:14.722 [2024-05-15 02:27:02.123260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:14.722 [2024-05-15 02:27:02.124267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:14.722 [2024-05-15 02:27:02.125271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:14.722 [2024-05-15 02:27:02.127955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:14.722 [2024-05-15 02:27:02.128310] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:14.722 [2024-05-15 02:27:02.128346] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc79f8e0000 00:10:14.722 [2024-05-15 02:27:02.129460] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:14.982 [2024-05-15 02:27:02.144493] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:14.982 [2024-05-15 02:27:02.144528] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:14.982 [2024-05-15 02:27:02.149649] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:14.982 [2024-05-15 02:27:02.149704] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:14.982 [2024-05-15 02:27:02.149800] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:14.982 [2024-05-15 02:27:02.149827] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:14.982 [2024-05-15 02:27:02.149838] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:14.982 [2024-05-15 02:27:02.150651] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:14.982 [2024-05-15 02:27:02.150673] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:14.982 [2024-05-15 02:27:02.150685] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:14.982 [2024-05-15 02:27:02.151654] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:14.982 [2024-05-15 02:27:02.151674] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:14.982 [2024-05-15 02:27:02.151688] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:14.982 [2024-05-15 02:27:02.152667] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:14.982 [2024-05-15 02:27:02.152688] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:14.982 [2024-05-15 02:27:02.153673] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:14.982 [2024-05-15 02:27:02.153693] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:14.982 [2024-05-15 02:27:02.153702] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:14.982 [2024-05-15 02:27:02.153714] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:14.982 [2024-05-15 02:27:02.153824] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:14.982 [2024-05-15 02:27:02.153832] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:14.982 [2024-05-15 02:27:02.153840] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:14.982 [2024-05-15 02:27:02.154685] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:14.982 [2024-05-15 02:27:02.155687] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:14.982 [2024-05-15 02:27:02.156695] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:14.982 [2024-05-15 02:27:02.157688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:14.982 [2024-05-15 02:27:02.157770] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:14.982 [2024-05-15 02:27:02.158706] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:14.982 [2024-05-15 02:27:02.158726] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:14.982 [2024-05-15 02:27:02.158739] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:14.982 [2024-05-15 02:27:02.158763] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:14.982 [2024-05-15 02:27:02.158776] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:14.982 [2024-05-15 02:27:02.158797] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:14.982 [2024-05-15 02:27:02.158806] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:14.982 [2024-05-15 02:27:02.158827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:14.982 [2024-05-15 02:27:02.166946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:14.982 [2024-05-15 02:27:02.166982] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:14.982 [2024-05-15 02:27:02.166991] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:14.982 [2024-05-15 02:27:02.166999] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:14.983 [2024-05-15 02:27:02.167007] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:14.983 [2024-05-15 02:27:02.167015] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:14.983 [2024-05-15 02:27:02.167023] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:14.983 [2024-05-15 02:27:02.167031] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.167049] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.167069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.174943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.174968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.983 [2024-05-15 02:27:02.174986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.983 [2024-05-15 02:27:02.174999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.983 [2024-05-15 02:27:02.175010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.983 [2024-05-15 02:27:02.175019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.175035] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.175050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.182954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.182974] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:14.983 [2024-05-15 02:27:02.182997] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.183010] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.183023] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.183037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.190941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.191005] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.191022] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.191036] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:14.983 [2024-05-15 02:27:02.191044] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:14.983 [2024-05-15 02:27:02.191054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.198943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.198971] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:14.983 [2024-05-15 02:27:02.198990] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.199005] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.199017] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:14.983 [2024-05-15 02:27:02.199026] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:14.983 [2024-05-15 02:27:02.199036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.206954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.206987] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.207003] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.207017] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:14.983 [2024-05-15 02:27:02.207025] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:14.983 [2024-05-15 02:27:02.207035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.214956] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.214996] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.215016] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.215031] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.215042] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.215050] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.215059] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:14.983 [2024-05-15 02:27:02.215067] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:14.983 [2024-05-15 02:27:02.215075] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:14.983 [2024-05-15 02:27:02.215106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.222956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.222995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.230957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.230983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.238956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.238980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.246939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.246966] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:14.983 [2024-05-15 02:27:02.246976] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:14.983 [2024-05-15 02:27:02.246983] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:14.983 [2024-05-15 02:27:02.246989] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:14.983 [2024-05-15 02:27:02.246998] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:14.983 [2024-05-15 02:27:02.247010] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:14.983 [2024-05-15 02:27:02.247018] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:14.983 [2024-05-15 02:27:02.247027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.247038] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:14.983 [2024-05-15 02:27:02.247046] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:14.983 [2024-05-15 02:27:02.247055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.247072] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:14.983 [2024-05-15 02:27:02.247084] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:14.983 [2024-05-15 02:27:02.247094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:14.983 [2024-05-15 02:27:02.254945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.254973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.254991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:14.983 [2024-05-15 02:27:02.255005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:14.983 ===================================================== 00:10:14.983 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:14.983 ===================================================== 00:10:14.983 Controller Capabilities/Features 00:10:14.983 ================================ 00:10:14.983 Vendor ID: 4e58 00:10:14.983 Subsystem Vendor ID: 4e58 00:10:14.983 Serial Number: SPDK2 00:10:14.983 Model Number: SPDK bdev Controller 00:10:14.983 Firmware Version: 24.05 00:10:14.983 Recommended Arb Burst: 6 00:10:14.983 IEEE OUI Identifier: 8d 6b 50 00:10:14.983 Multi-path I/O 00:10:14.983 May have multiple subsystem ports: Yes 00:10:14.983 May have multiple controllers: Yes 00:10:14.983 Associated with SR-IOV VF: No 00:10:14.983 Max Data Transfer Size: 131072 00:10:14.983 Max Number of Namespaces: 32 00:10:14.983 Max Number of I/O Queues: 127 00:10:14.983 NVMe Specification Version (VS): 1.3 00:10:14.983 NVMe Specification Version (Identify): 1.3 00:10:14.983 Maximum Queue Entries: 256 00:10:14.983 Contiguous Queues Required: Yes 00:10:14.983 Arbitration Mechanisms Supported 00:10:14.984 Weighted Round Robin: Not Supported 00:10:14.984 Vendor Specific: Not Supported 00:10:14.984 Reset Timeout: 15000 ms 00:10:14.984 Doorbell Stride: 4 bytes 
00:10:14.984 NVM Subsystem Reset: Not Supported 00:10:14.984 Command Sets Supported 00:10:14.984 NVM Command Set: Supported 00:10:14.984 Boot Partition: Not Supported 00:10:14.984 Memory Page Size Minimum: 4096 bytes 00:10:14.984 Memory Page Size Maximum: 4096 bytes 00:10:14.984 Persistent Memory Region: Not Supported 00:10:14.984 Optional Asynchronous Events Supported 00:10:14.984 Namespace Attribute Notices: Supported 00:10:14.984 Firmware Activation Notices: Not Supported 00:10:14.984 ANA Change Notices: Not Supported 00:10:14.984 PLE Aggregate Log Change Notices: Not Supported 00:10:14.984 LBA Status Info Alert Notices: Not Supported 00:10:14.984 EGE Aggregate Log Change Notices: Not Supported 00:10:14.984 Normal NVM Subsystem Shutdown event: Not Supported 00:10:14.984 Zone Descriptor Change Notices: Not Supported 00:10:14.984 Discovery Log Change Notices: Not Supported 00:10:14.984 Controller Attributes 00:10:14.984 128-bit Host Identifier: Supported 00:10:14.984 Non-Operational Permissive Mode: Not Supported 00:10:14.984 NVM Sets: Not Supported 00:10:14.984 Read Recovery Levels: Not Supported 00:10:14.984 Endurance Groups: Not Supported 00:10:14.984 Predictable Latency Mode: Not Supported 00:10:14.984 Traffic Based Keep ALive: Not Supported 00:10:14.984 Namespace Granularity: Not Supported 00:10:14.984 SQ Associations: Not Supported 00:10:14.984 UUID List: Not Supported 00:10:14.984 Multi-Domain Subsystem: Not Supported 00:10:14.984 Fixed Capacity Management: Not Supported 00:10:14.984 Variable Capacity Management: Not Supported 00:10:14.984 Delete Endurance Group: Not Supported 00:10:14.984 Delete NVM Set: Not Supported 00:10:14.984 Extended LBA Formats Supported: Not Supported 00:10:14.984 Flexible Data Placement Supported: Not Supported 00:10:14.984 00:10:14.984 Controller Memory Buffer Support 00:10:14.984 ================================ 00:10:14.984 Supported: No 00:10:14.984 00:10:14.984 Persistent Memory Region Support 00:10:14.984 ================================ 00:10:14.984 Supported: No 00:10:14.984 00:10:14.984 Admin Command Set Attributes 00:10:14.984 ============================ 00:10:14.984 Security Send/Receive: Not Supported 00:10:14.984 Format NVM: Not Supported 00:10:14.984 Firmware Activate/Download: Not Supported 00:10:14.984 Namespace Management: Not Supported 00:10:14.984 Device Self-Test: Not Supported 00:10:14.984 Directives: Not Supported 00:10:14.984 NVMe-MI: Not Supported 00:10:14.984 Virtualization Management: Not Supported 00:10:14.984 Doorbell Buffer Config: Not Supported 00:10:14.984 Get LBA Status Capability: Not Supported 00:10:14.984 Command & Feature Lockdown Capability: Not Supported 00:10:14.984 Abort Command Limit: 4 00:10:14.984 Async Event Request Limit: 4 00:10:14.984 Number of Firmware Slots: N/A 00:10:14.984 Firmware Slot 1 Read-Only: N/A 00:10:14.984 Firmware Activation Without Reset: N/A 00:10:14.984 Multiple Update Detection Support: N/A 00:10:14.984 Firmware Update Granularity: No Information Provided 00:10:14.984 Per-Namespace SMART Log: No 00:10:14.984 Asymmetric Namespace Access Log Page: Not Supported 00:10:14.984 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:14.984 Command Effects Log Page: Supported 00:10:14.984 Get Log Page Extended Data: Supported 00:10:14.984 Telemetry Log Pages: Not Supported 00:10:14.984 Persistent Event Log Pages: Not Supported 00:10:14.984 Supported Log Pages Log Page: May Support 00:10:14.984 Commands Supported & Effects Log Page: Not Supported 00:10:14.984 Feature Identifiers & Effects Log Page:May 
Support 00:10:14.984 NVMe-MI Commands & Effects Log Page: May Support 00:10:14.984 Data Area 4 for Telemetry Log: Not Supported 00:10:14.984 Error Log Page Entries Supported: 128 00:10:14.984 Keep Alive: Supported 00:10:14.984 Keep Alive Granularity: 10000 ms 00:10:14.984 00:10:14.984 NVM Command Set Attributes 00:10:14.984 ========================== 00:10:14.984 Submission Queue Entry Size 00:10:14.984 Max: 64 00:10:14.984 Min: 64 00:10:14.984 Completion Queue Entry Size 00:10:14.984 Max: 16 00:10:14.984 Min: 16 00:10:14.984 Number of Namespaces: 32 00:10:14.984 Compare Command: Supported 00:10:14.984 Write Uncorrectable Command: Not Supported 00:10:14.984 Dataset Management Command: Supported 00:10:14.984 Write Zeroes Command: Supported 00:10:14.984 Set Features Save Field: Not Supported 00:10:14.984 Reservations: Not Supported 00:10:14.984 Timestamp: Not Supported 00:10:14.984 Copy: Supported 00:10:14.984 Volatile Write Cache: Present 00:10:14.984 Atomic Write Unit (Normal): 1 00:10:14.984 Atomic Write Unit (PFail): 1 00:10:14.984 Atomic Compare & Write Unit: 1 00:10:14.984 Fused Compare & Write: Supported 00:10:14.984 Scatter-Gather List 00:10:14.984 SGL Command Set: Supported (Dword aligned) 00:10:14.984 SGL Keyed: Not Supported 00:10:14.984 SGL Bit Bucket Descriptor: Not Supported 00:10:14.984 SGL Metadata Pointer: Not Supported 00:10:14.984 Oversized SGL: Not Supported 00:10:14.984 SGL Metadata Address: Not Supported 00:10:14.984 SGL Offset: Not Supported 00:10:14.984 Transport SGL Data Block: Not Supported 00:10:14.984 Replay Protected Memory Block: Not Supported 00:10:14.984 00:10:14.984 Firmware Slot Information 00:10:14.984 ========================= 00:10:14.984 Active slot: 1 00:10:14.984 Slot 1 Firmware Revision: 24.05 00:10:14.984 00:10:14.984 00:10:14.984 Commands Supported and Effects 00:10:14.984 ============================== 00:10:14.984 Admin Commands 00:10:14.984 -------------- 00:10:14.984 Get Log Page (02h): Supported 00:10:14.984 Identify (06h): Supported 00:10:14.984 Abort (08h): Supported 00:10:14.984 Set Features (09h): Supported 00:10:14.984 Get Features (0Ah): Supported 00:10:14.984 Asynchronous Event Request (0Ch): Supported 00:10:14.984 Keep Alive (18h): Supported 00:10:14.984 I/O Commands 00:10:14.984 ------------ 00:10:14.984 Flush (00h): Supported LBA-Change 00:10:14.984 Write (01h): Supported LBA-Change 00:10:14.984 Read (02h): Supported 00:10:14.984 Compare (05h): Supported 00:10:14.984 Write Zeroes (08h): Supported LBA-Change 00:10:14.984 Dataset Management (09h): Supported LBA-Change 00:10:14.984 Copy (19h): Supported LBA-Change 00:10:14.984 Unknown (79h): Supported LBA-Change 00:10:14.984 Unknown (7Ah): Supported 00:10:14.984 00:10:14.984 Error Log 00:10:14.984 ========= 00:10:14.984 00:10:14.984 Arbitration 00:10:14.984 =========== 00:10:14.984 Arbitration Burst: 1 00:10:14.984 00:10:14.984 Power Management 00:10:14.984 ================ 00:10:14.984 Number of Power States: 1 00:10:14.984 Current Power State: Power State #0 00:10:14.984 Power State #0: 00:10:14.984 Max Power: 0.00 W 00:10:14.984 Non-Operational State: Operational 00:10:14.984 Entry Latency: Not Reported 00:10:14.984 Exit Latency: Not Reported 00:10:14.984 Relative Read Throughput: 0 00:10:14.984 Relative Read Latency: 0 00:10:14.984 Relative Write Throughput: 0 00:10:14.984 Relative Write Latency: 0 00:10:14.984 Idle Power: Not Reported 00:10:14.984 Active Power: Not Reported 00:10:14.984 Non-Operational Permissive Mode: Not Supported 00:10:14.984 00:10:14.984 Health Information 
00:10:14.984 ================== 00:10:14.984 Critical Warnings: 00:10:14.984 Available Spare Space: OK 00:10:14.984 Temperature: OK 00:10:14.984 Device Reliability: OK 00:10:14.984 Read Only: No 00:10:14.984 Volatile Memory Backup: OK 00:10:14.984 Current Temperature: 0 Kelvin (-2[2024-05-15 02:27:02.255128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:14.984 [2024-05-15 02:27:02.262953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:14.984 [2024-05-15 02:27:02.263002] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:14.984 [2024-05-15 02:27:02.263019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.984 [2024-05-15 02:27:02.263030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.984 [2024-05-15 02:27:02.263039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.984 [2024-05-15 02:27:02.263049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.984 [2024-05-15 02:27:02.263114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:14.984 [2024-05-15 02:27:02.263135] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:14.984 [2024-05-15 02:27:02.264120] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:14.984 [2024-05-15 02:27:02.264194] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:14.984 [2024-05-15 02:27:02.264210] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:14.985 [2024-05-15 02:27:02.265135] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:14.985 [2024-05-15 02:27:02.265160] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:14.985 [2024-05-15 02:27:02.265214] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:14.985 [2024-05-15 02:27:02.266415] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:14.985 73 Celsius) 00:10:14.985 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:14.985 Available Spare: 0% 00:10:14.985 Available Spare Threshold: 0% 00:10:14.985 Life Percentage Used: 0% 00:10:14.985 Data Units Read: 0 00:10:14.985 Data Units Written: 0 00:10:14.985 Host Read Commands: 0 00:10:14.985 Host Write Commands: 0 00:10:14.985 Controller Busy Time: 0 minutes 00:10:14.985 Power Cycles: 0 00:10:14.985 Power On Hours: 0 hours 00:10:14.985 Unsafe Shutdowns: 0 00:10:14.985 Unrecoverable Media Errors: 0 00:10:14.985 Lifetime Error Log Entries: 0 00:10:14.985 Warning Temperature Time: 0 
minutes 00:10:14.985 Critical Temperature Time: 0 minutes 00:10:14.985 00:10:14.985 Number of Queues 00:10:14.985 ================ 00:10:14.985 Number of I/O Submission Queues: 127 00:10:14.985 Number of I/O Completion Queues: 127 00:10:14.985 00:10:14.985 Active Namespaces 00:10:14.985 ================= 00:10:14.985 Namespace ID:1 00:10:14.985 Error Recovery Timeout: Unlimited 00:10:14.985 Command Set Identifier: NVM (00h) 00:10:14.985 Deallocate: Supported 00:10:14.985 Deallocated/Unwritten Error: Not Supported 00:10:14.985 Deallocated Read Value: Unknown 00:10:14.985 Deallocate in Write Zeroes: Not Supported 00:10:14.985 Deallocated Guard Field: 0xFFFF 00:10:14.985 Flush: Supported 00:10:14.985 Reservation: Supported 00:10:14.985 Namespace Sharing Capabilities: Multiple Controllers 00:10:14.985 Size (in LBAs): 131072 (0GiB) 00:10:14.985 Capacity (in LBAs): 131072 (0GiB) 00:10:14.985 Utilization (in LBAs): 131072 (0GiB) 00:10:14.985 NGUID: 4358F44CE9AF40B899C186FD20406CE8 00:10:14.985 UUID: 4358f44c-e9af-40b8-99c1-86fd20406ce8 00:10:14.985 Thin Provisioning: Not Supported 00:10:14.985 Per-NS Atomic Units: Yes 00:10:14.985 Atomic Boundary Size (Normal): 0 00:10:14.985 Atomic Boundary Size (PFail): 0 00:10:14.985 Atomic Boundary Offset: 0 00:10:14.985 Maximum Single Source Range Length: 65535 00:10:14.985 Maximum Copy Length: 65535 00:10:14.985 Maximum Source Range Count: 1 00:10:14.985 NGUID/EUI64 Never Reused: No 00:10:14.985 Namespace Write Protected: No 00:10:14.985 Number of LBA Formats: 1 00:10:14.985 Current LBA Format: LBA Format #00 00:10:14.985 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:14.985 00:10:14.985 02:27:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:14.985 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.244 [2024-05-15 02:27:02.493720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:20.528 Initializing NVMe Controllers 00:10:20.528 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:20.528 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:20.528 Initialization complete. Launching workers. 
00:10:20.528 ======================================================== 00:10:20.528 Latency(us) 00:10:20.528 Device Information : IOPS MiB/s Average min max 00:10:20.528 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34485.96 134.71 3711.27 1165.99 10640.32 00:10:20.528 ======================================================== 00:10:20.528 Total : 34485.96 134.71 3711.27 1165.99 10640.32 00:10:20.528 00:10:20.529 [2024-05-15 02:27:07.596322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:20.529 02:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:20.529 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.529 [2024-05-15 02:27:07.840018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:25.793 Initializing NVMe Controllers 00:10:25.793 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:25.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:25.793 Initialization complete. Launching workers. 00:10:25.793 ======================================================== 00:10:25.793 Latency(us) 00:10:25.793 Device Information : IOPS MiB/s Average min max 00:10:25.793 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31605.61 123.46 4048.92 1216.04 8225.37 00:10:25.793 ======================================================== 00:10:25.793 Total : 31605.61 123.46 4048.92 1216.04 8225.37 00:10:25.793 00:10:25.793 [2024-05-15 02:27:12.856985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:25.793 02:27:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:25.793 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.793 [2024-05-15 02:27:13.077706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:31.058 [2024-05-15 02:27:18.209079] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:31.058 Initializing NVMe Controllers 00:10:31.058 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.058 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.058 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:31.058 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:31.058 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:31.058 Initialization complete. Launching workers. 
00:10:31.058 Starting thread on core 2 00:10:31.058 Starting thread on core 3 00:10:31.058 Starting thread on core 1 00:10:31.058 02:27:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:31.058 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.317 [2024-05-15 02:27:18.519746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:34.602 [2024-05-15 02:27:21.575041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:34.602 Initializing NVMe Controllers 00:10:34.602 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.602 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.602 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:34.602 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:34.602 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:34.602 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:34.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:34.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:34.602 Initialization complete. Launching workers. 00:10:34.602 Starting thread on core 1 with urgent priority queue 00:10:34.602 Starting thread on core 2 with urgent priority queue 00:10:34.602 Starting thread on core 3 with urgent priority queue 00:10:34.602 Starting thread on core 0 with urgent priority queue 00:10:34.602 SPDK bdev Controller (SPDK2 ) core 0: 5864.33 IO/s 17.05 secs/100000 ios 00:10:34.602 SPDK bdev Controller (SPDK2 ) core 1: 6018.00 IO/s 16.62 secs/100000 ios 00:10:34.602 SPDK bdev Controller (SPDK2 ) core 2: 6243.67 IO/s 16.02 secs/100000 ios 00:10:34.602 SPDK bdev Controller (SPDK2 ) core 3: 6212.33 IO/s 16.10 secs/100000 ios 00:10:34.602 ======================================================== 00:10:34.602 00:10:34.602 02:27:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:34.602 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.602 [2024-05-15 02:27:21.897534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:34.602 Initializing NVMe Controllers 00:10:34.602 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.602 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.603 Namespace ID: 1 size: 0GB 00:10:34.603 Initialization complete. 00:10:34.603 INFO: using host memory buffer for IO 00:10:34.603 Hello world! 
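For reference, the example runs above (perf read, perf write, reconnect, arbitration, hello_world) all pass the same vfio-user transport string to the SPDK tools. A minimal sketch of the pattern, assuming SPDK was built in-tree so the binaries sit under build/ (the log uses the absolute Jenkins workspace path) and with TR introduced here only as shorthand:

  TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2   # QD 128, 4 KiB reads, 5 s, core mask 0x2
  build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # same shape, writes
  build/examples/reconnect   -r "$TR" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE  # 50/50 random mix across cores 1-3
  build/examples/arbitration -t 3 -r "$TR" -d 256 -g
  build/examples/hello_world -d 256 -g -r "$TR"

The flags are copied from the invocations recorded in this log; the core masks match the lcore associations printed in the results above (0x2 pins the perf runs to core 1, 0xE spreads the reconnect threads over cores 1, 2 and 3).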
00:10:34.603 [2024-05-15 02:27:21.906698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:34.603 02:27:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:34.603 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.860 [2024-05-15 02:27:22.222716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:36.233 Initializing NVMe Controllers 00:10:36.233 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.233 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.233 Initialization complete. Launching workers. 00:10:36.233 submit (in ns) avg, min, max = 7762.9, 3497.8, 4015970.0 00:10:36.233 complete (in ns) avg, min, max = 25835.6, 2087.8, 5011415.6 00:10:36.233 00:10:36.233 Submit histogram 00:10:36.233 ================ 00:10:36.233 Range in us Cumulative Count 00:10:36.233 3.484 - 3.508: 0.0673% ( 9) 00:10:36.233 3.508 - 3.532: 0.3292% ( 35) 00:10:36.233 3.532 - 3.556: 1.6013% ( 170) 00:10:36.233 3.556 - 3.579: 3.6067% ( 268) 00:10:36.233 3.579 - 3.603: 9.3236% ( 764) 00:10:36.233 3.603 - 3.627: 16.0730% ( 902) 00:10:36.233 3.627 - 3.650: 27.1101% ( 1475) 00:10:36.233 3.650 - 3.674: 36.8677% ( 1304) 00:10:36.233 3.674 - 3.698: 45.7647% ( 1189) 00:10:36.233 3.698 - 3.721: 51.9755% ( 830) 00:10:36.233 3.721 - 3.745: 55.5747% ( 481) 00:10:36.233 3.745 - 3.769: 59.6603% ( 546) 00:10:36.233 3.769 - 3.793: 62.6833% ( 404) 00:10:36.233 3.793 - 3.816: 66.3274% ( 487) 00:10:36.233 3.816 - 3.840: 69.2083% ( 385) 00:10:36.233 3.840 - 3.864: 73.1892% ( 532) 00:10:36.233 3.864 - 3.887: 77.5741% ( 586) 00:10:36.233 3.887 - 3.911: 81.6672% ( 547) 00:10:36.233 3.911 - 3.935: 85.0045% ( 446) 00:10:36.233 3.935 - 3.959: 86.9949% ( 266) 00:10:36.233 3.959 - 3.982: 88.6486% ( 221) 00:10:36.233 3.982 - 4.006: 90.0404% ( 186) 00:10:36.233 4.006 - 4.030: 90.9982% ( 128) 00:10:36.233 4.030 - 4.053: 91.7614% ( 102) 00:10:36.233 4.053 - 4.077: 92.4648% ( 94) 00:10:36.233 4.077 - 4.101: 93.2206% ( 101) 00:10:36.233 4.101 - 4.124: 93.9838% ( 102) 00:10:36.233 4.124 - 4.148: 94.5974% ( 82) 00:10:36.233 4.148 - 4.172: 95.0763% ( 64) 00:10:36.233 4.172 - 4.196: 95.4505% ( 50) 00:10:36.233 4.196 - 4.219: 95.8246% ( 50) 00:10:36.233 4.219 - 4.243: 96.0865% ( 35) 00:10:36.233 4.243 - 4.267: 96.4008% ( 42) 00:10:36.233 4.267 - 4.290: 96.6253% ( 30) 00:10:36.233 4.290 - 4.314: 96.7749% ( 20) 00:10:36.233 4.314 - 4.338: 96.9171% ( 19) 00:10:36.233 4.338 - 4.361: 97.0368% ( 16) 00:10:36.233 4.361 - 4.385: 97.2014% ( 22) 00:10:36.233 4.385 - 4.409: 97.3436% ( 19) 00:10:36.233 4.409 - 4.433: 97.4110% ( 9) 00:10:36.233 4.433 - 4.456: 97.4334% ( 3) 00:10:36.233 4.456 - 4.480: 97.5007% ( 9) 00:10:36.233 4.480 - 4.504: 97.5232% ( 3) 00:10:36.233 4.504 - 4.527: 97.5681% ( 6) 00:10:36.233 4.527 - 4.551: 97.6055% ( 5) 00:10:36.233 4.551 - 4.575: 97.6280% ( 3) 00:10:36.233 4.575 - 4.599: 97.6354% ( 1) 00:10:36.233 4.599 - 4.622: 97.6504% ( 2) 00:10:36.233 4.622 - 4.646: 97.6729% ( 3) 00:10:36.233 4.646 - 4.670: 97.6803% ( 1) 00:10:36.233 4.693 - 4.717: 97.6953% ( 2) 00:10:36.233 4.764 - 4.788: 97.7103% ( 2) 00:10:36.233 4.788 - 4.812: 97.7177% ( 1) 00:10:36.233 4.836 - 4.859: 97.7626% ( 6) 00:10:36.233 4.859 - 4.883: 97.7701% ( 1) 00:10:36.233 4.883 - 4.907: 97.7851% ( 2) 00:10:36.233 
4.907 - 4.930: 97.8150% ( 4) 00:10:36.233 4.930 - 4.954: 97.8674% ( 7) 00:10:36.233 4.954 - 4.978: 97.9048% ( 5) 00:10:36.233 4.978 - 5.001: 97.9422% ( 5) 00:10:36.233 5.001 - 5.025: 97.9572% ( 2) 00:10:36.233 5.025 - 5.049: 98.0021% ( 6) 00:10:36.233 5.049 - 5.073: 98.0395% ( 5) 00:10:36.233 5.073 - 5.096: 98.0694% ( 4) 00:10:36.233 5.096 - 5.120: 98.1368% ( 9) 00:10:36.233 5.120 - 5.144: 98.1667% ( 4) 00:10:36.233 5.144 - 5.167: 98.1966% ( 4) 00:10:36.233 5.167 - 5.191: 98.2415% ( 6) 00:10:36.233 5.191 - 5.215: 98.2565% ( 2) 00:10:36.233 5.215 - 5.239: 98.2715% ( 2) 00:10:36.233 5.239 - 5.262: 98.2790% ( 1) 00:10:36.233 5.262 - 5.286: 98.3014% ( 3) 00:10:36.233 5.286 - 5.310: 98.3239% ( 3) 00:10:36.233 5.310 - 5.333: 98.3538% ( 4) 00:10:36.233 5.333 - 5.357: 98.3613% ( 1) 00:10:36.233 5.357 - 5.381: 98.3912% ( 4) 00:10:36.233 5.381 - 5.404: 98.4062% ( 2) 00:10:36.233 5.404 - 5.428: 98.4211% ( 2) 00:10:36.233 5.428 - 5.452: 98.4361% ( 2) 00:10:36.233 5.452 - 5.476: 98.4436% ( 1) 00:10:36.233 5.476 - 5.499: 98.4511% ( 1) 00:10:36.233 5.499 - 5.523: 98.4585% ( 1) 00:10:36.233 5.523 - 5.547: 98.4660% ( 1) 00:10:36.233 5.547 - 5.570: 98.4735% ( 1) 00:10:36.233 5.570 - 5.594: 98.4885% ( 2) 00:10:36.233 5.594 - 5.618: 98.4960% ( 1) 00:10:36.233 5.618 - 5.641: 98.5034% ( 1) 00:10:36.233 5.641 - 5.665: 98.5109% ( 1) 00:10:36.233 5.807 - 5.831: 98.5184% ( 1) 00:10:36.233 5.855 - 5.879: 98.5334% ( 2) 00:10:36.233 6.305 - 6.353: 98.5409% ( 1) 00:10:36.233 6.400 - 6.447: 98.5558% ( 2) 00:10:36.234 6.542 - 6.590: 98.5708% ( 2) 00:10:36.234 6.732 - 6.779: 98.5783% ( 1) 00:10:36.234 6.827 - 6.874: 98.5858% ( 1) 00:10:36.234 6.874 - 6.921: 98.5932% ( 1) 00:10:36.234 7.159 - 7.206: 98.6082% ( 2) 00:10:36.234 7.206 - 7.253: 98.6306% ( 3) 00:10:36.234 7.301 - 7.348: 98.6381% ( 1) 00:10:36.234 7.490 - 7.538: 98.6456% ( 1) 00:10:36.234 7.680 - 7.727: 98.6531% ( 1) 00:10:36.234 7.727 - 7.775: 98.6606% ( 1) 00:10:36.234 7.775 - 7.822: 98.6681% ( 1) 00:10:36.234 7.917 - 7.964: 98.6980% ( 4) 00:10:36.234 8.012 - 8.059: 98.7055% ( 1) 00:10:36.234 8.059 - 8.107: 98.7204% ( 2) 00:10:36.234 8.154 - 8.201: 98.7354% ( 2) 00:10:36.234 8.201 - 8.249: 98.7429% ( 1) 00:10:36.234 8.249 - 8.296: 98.7504% ( 1) 00:10:36.234 8.391 - 8.439: 98.7579% ( 1) 00:10:36.234 8.533 - 8.581: 98.7653% ( 1) 00:10:36.234 8.723 - 8.770: 98.7728% ( 1) 00:10:36.234 8.770 - 8.818: 98.7803% ( 1) 00:10:36.234 9.197 - 9.244: 98.7878% ( 1) 00:10:36.234 9.244 - 9.292: 98.8028% ( 2) 00:10:36.234 9.481 - 9.529: 98.8102% ( 1) 00:10:36.234 9.576 - 9.624: 98.8177% ( 1) 00:10:36.234 9.671 - 9.719: 98.8252% ( 1) 00:10:36.234 10.240 - 10.287: 98.8327% ( 1) 00:10:36.234 10.335 - 10.382: 98.8402% ( 1) 00:10:36.234 10.524 - 10.572: 98.8477% ( 1) 00:10:36.234 10.761 - 10.809: 98.8551% ( 1) 00:10:36.234 11.141 - 11.188: 98.8626% ( 1) 00:10:36.234 11.520 - 11.567: 98.8701% ( 1) 00:10:36.234 13.843 - 13.938: 98.8776% ( 1) 00:10:36.234 13.938 - 14.033: 98.8851% ( 1) 00:10:36.234 17.067 - 17.161: 98.9000% ( 2) 00:10:36.234 17.161 - 17.256: 98.9075% ( 1) 00:10:36.234 17.256 - 17.351: 98.9150% ( 1) 00:10:36.234 17.351 - 17.446: 98.9524% ( 5) 00:10:36.234 17.446 - 17.541: 99.0048% ( 7) 00:10:36.234 17.541 - 17.636: 99.0422% ( 5) 00:10:36.234 17.636 - 17.730: 99.0721% ( 4) 00:10:36.234 17.730 - 17.825: 99.1245% ( 7) 00:10:36.234 17.825 - 17.920: 99.1395% ( 2) 00:10:36.234 17.920 - 18.015: 99.2143% ( 10) 00:10:36.234 18.015 - 18.110: 99.2891% ( 10) 00:10:36.234 18.110 - 18.204: 99.3714% ( 11) 00:10:36.234 18.204 - 18.299: 99.4163% ( 6) 00:10:36.234 18.299 - 18.394: 
99.4837% ( 9) 00:10:36.234 18.394 - 18.489: 99.5810% ( 13) 00:10:36.234 18.489 - 18.584: 99.6558% ( 10) 00:10:36.234 18.584 - 18.679: 99.7456% ( 12) 00:10:36.234 18.679 - 18.773: 99.7606% ( 2) 00:10:36.234 18.773 - 18.868: 99.7905% ( 4) 00:10:36.234 18.868 - 18.963: 99.8054% ( 2) 00:10:36.234 18.963 - 19.058: 99.8204% ( 2) 00:10:36.234 19.247 - 19.342: 99.8279% ( 1) 00:10:36.234 19.342 - 19.437: 99.8429% ( 2) 00:10:36.234 19.532 - 19.627: 99.8503% ( 1) 00:10:36.234 20.006 - 20.101: 99.8578% ( 1) 00:10:36.234 21.523 - 21.618: 99.8653% ( 1) 00:10:36.234 23.419 - 23.514: 99.8728% ( 1) 00:10:36.234 25.410 - 25.600: 99.8803% ( 1) 00:10:36.234 25.790 - 25.979: 99.8878% ( 1) 00:10:36.234 28.255 - 28.444: 99.8952% ( 1) 00:10:36.234 32.427 - 32.616: 99.9027% ( 1) 00:10:36.234 3021.938 - 3034.074: 99.9102% ( 1) 00:10:36.234 3980.705 - 4004.978: 99.9701% ( 8) 00:10:36.234 4004.978 - 4029.250: 100.0000% ( 4) 00:10:36.234 00:10:36.234 Complete histogram 00:10:36.234 ================== 00:10:36.234 Range in us Cumulative Count 00:10:36.234 2.086 - 2.098: 2.6414% ( 353) 00:10:36.234 2.098 - 2.110: 17.7791% ( 2023) 00:10:36.234 2.110 - 2.121: 19.7995% ( 270) 00:10:36.234 2.121 - 2.133: 36.6582% ( 2253) 00:10:36.234 2.133 - 2.145: 58.6950% ( 2945) 00:10:36.234 2.145 - 2.157: 60.6555% ( 262) 00:10:36.234 2.157 - 2.169: 64.6663% ( 536) 00:10:36.234 2.169 - 2.181: 68.4600% ( 507) 00:10:36.234 2.181 - 2.193: 69.3505% ( 119) 00:10:36.234 2.193 - 2.204: 75.4789% ( 819) 00:10:36.234 2.204 - 2.216: 79.9835% ( 602) 00:10:36.234 2.216 - 2.228: 80.8665% ( 118) 00:10:36.234 2.228 - 2.240: 82.1386% ( 170) 00:10:36.234 2.240 - 2.252: 84.7127% ( 344) 00:10:36.234 2.252 - 2.264: 86.0670% ( 181) 00:10:36.234 2.264 - 2.276: 88.3343% ( 303) 00:10:36.234 2.276 - 2.287: 92.1281% ( 507) 00:10:36.234 2.287 - 2.299: 93.2580% ( 151) 00:10:36.234 2.299 - 2.311: 93.6172% ( 48) 00:10:36.234 2.311 - 2.323: 94.0362% ( 56) 00:10:36.234 2.323 - 2.335: 94.5450% ( 68) 00:10:36.234 2.335 - 2.347: 94.7845% ( 32) 00:10:36.234 2.347 - 2.359: 95.0389% ( 34) 00:10:36.234 2.359 - 2.370: 95.3233% ( 38) 00:10:36.234 2.370 - 2.382: 95.4056% ( 11) 00:10:36.234 2.382 - 2.394: 95.5103% ( 14) 00:10:36.234 2.394 - 2.406: 95.6899% ( 24) 00:10:36.234 2.406 - 2.418: 95.8246% ( 18) 00:10:36.234 2.418 - 2.430: 95.9219% ( 13) 00:10:36.234 2.430 - 2.441: 96.0192% ( 13) 00:10:36.234 2.441 - 2.453: 96.1314% ( 15) 00:10:36.234 2.453 - 2.465: 96.2511% ( 16) 00:10:36.234 2.465 - 2.477: 96.4606% ( 28) 00:10:36.234 2.477 - 2.489: 96.5804% ( 16) 00:10:36.234 2.489 - 2.501: 96.7525% ( 23) 00:10:36.234 2.501 - 2.513: 96.9021% ( 20) 00:10:36.234 2.513 - 2.524: 97.1042% ( 27) 00:10:36.234 2.524 - 2.536: 97.2837% ( 24) 00:10:36.234 2.536 - 2.548: 97.4783% ( 26) 00:10:36.234 2.548 - 2.560: 97.6130% ( 18) 00:10:36.234 2.560 - 2.572: 97.7552% ( 19) 00:10:36.234 2.572 - 2.584: 97.9497% ( 26) 00:10:36.234 2.584 - 2.596: 98.0545% ( 14) 00:10:36.234 2.596 - 2.607: 98.1218% ( 9) 00:10:36.234 2.607 - 2.619: 98.1892% ( 9) 00:10:36.234 2.619 - 2.631: 98.2715% ( 11) 00:10:36.234 2.631 - 2.643: 98.3388% ( 9) 00:10:36.234 2.643 - 2.655: 98.3613% ( 3) 00:10:36.234 2.655 - 2.667: 98.3837% ( 3) 00:10:36.234 2.667 - 2.679: 98.4211% ( 5) 00:10:36.234 2.679 - 2.690: 98.4585% ( 5) 00:10:36.234 2.702 - 2.714: 98.4735% ( 2) 00:10:36.234 2.714 - 2.726: 98.4810% ( 1) 00:10:36.234 2.773 - 2.785: 98.5034% ( 3) 00:10:36.234 2.833 - 2.844: 98.5109% ( 1) 00:10:36.234 2.844 - 2.856: 98.5184% ( 1) 00:10:36.234 3.437 - 3.461: 98.5259% ( 1) 00:10:36.234 3.484 - 3.508: 98.5334% ( 1) 00:10:36.234 3.508 - 
3.532: 98.5483% ( 2) 00:10:36.234 3.532 - 3.556: 98.5708% ( 3) 00:10:36.234 3.556 - 3.579: 98.5932% ( 3) 00:10:36.234 3.603 - 3.627: 98.6157% ( 3) 00:10:36.234 3.627 - 3.650: 98.6232% ( 1) 00:10:36.234 3.650 - 3.674: 98.6306% ( 1) 00:10:36.234 3.674 - 3.698: 98.6531% ( 3) 00:10:36.234 3.698 - 3.721: 98.6606% ( 1) 00:10:36.234 3.721 - 3.745: 98.6755% ( 2) 00:10:36.234 3.769 - 3.793: 98.6830% ( 1) 00:10:36.234 3.793 - 3.816: 98.6905% ( 1) 00:10:36.234 3.840 - 3.864: 98.6980% ( 1) 00:10:36.234 3.864 - 3.887: 98.7055% ( 1) 00:10:36.234 3.887 - 3.911: 98.7279% ( 3) 00:10:36.234 4.077 - 4.101: 98.7354% ( 1) 00:10:36.234 5.215 - 5.239: 98.7429% ( 1) 00:10:36.234 5.357 - 5.381: 98.7579% ( 2) 00:10:36.234 5.594 - 5.618: 98.7653% ( 1) 00:10:36.234 5.784 - 5.807: 98.7728% ( 1) 00:10:36.234 5.879 - 5.902: 98.7803% ( 1) 00:10:36.234 5.950 - 5.973: 98.7953% ( 2) 00:10:36.234 5.973 - 5.997: 98.8102% ( 2) 00:10:36.234 5.997 - 6.021: 98.8177% ( 1) 00:10:36.234 6.068 - 6.116: 98.8252% ( 1) 00:10:36.234 6.116 - 6.163: 98.8327% ( 1) 00:10:36.234 6.163 - 6.210: 98.8402% ( 1) 00:10:36.234 6.210 - 6.258: 98.8477% ( 1) 00:10:36.234 6.400 - 6.447: 98.8551% ( 1) 00:10:36.234 6.590 - 6.637: 98.8626% ( 1) 00:10:36.234 6.921 - 6.969: 98.8701% ( 1) 00:10:36.234 7.111 - 7.159: 98.8776% ( 1) 00:10:36.234 7.253 - 7.301: 98.8851% ( 1) 00:10:36.234 7.585 - 7.633: 98.8925% ( 1) 00:10:36.234 7.870 - 7.917: 9[2024-05-15 02:27:23.323724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:36.234 8.9000% ( 1) 00:10:36.234 8.059 - 8.107: 98.9075% ( 1) 00:10:36.234 10.382 - 10.430: 98.9150% ( 1) 00:10:36.234 15.550 - 15.644: 98.9300% ( 2) 00:10:36.234 15.644 - 15.739: 98.9449% ( 2) 00:10:36.234 15.834 - 15.929: 98.9749% ( 4) 00:10:36.234 15.929 - 16.024: 98.9973% ( 3) 00:10:36.234 16.024 - 16.119: 99.0198% ( 3) 00:10:36.234 16.119 - 16.213: 99.0347% ( 2) 00:10:36.234 16.213 - 16.308: 99.0572% ( 3) 00:10:36.234 16.308 - 16.403: 99.1170% ( 8) 00:10:36.234 16.403 - 16.498: 99.1320% ( 2) 00:10:36.234 16.498 - 16.593: 99.1619% ( 4) 00:10:36.234 16.593 - 16.687: 99.2068% ( 6) 00:10:36.234 16.782 - 16.877: 99.2667% ( 8) 00:10:36.234 16.877 - 16.972: 99.2966% ( 4) 00:10:36.234 17.067 - 17.161: 99.3116% ( 2) 00:10:36.234 17.161 - 17.256: 99.3340% ( 3) 00:10:36.234 17.256 - 17.351: 99.3415% ( 1) 00:10:36.234 17.351 - 17.446: 99.3565% ( 2) 00:10:36.234 17.446 - 17.541: 99.3714% ( 2) 00:10:36.234 17.541 - 17.636: 99.3789% ( 1) 00:10:36.234 17.636 - 17.730: 99.3864% ( 1) 00:10:36.234 17.920 - 18.015: 99.3939% ( 1) 00:10:36.234 18.110 - 18.204: 99.4014% ( 1) 00:10:36.234 18.584 - 18.679: 99.4089% ( 1) 00:10:36.234 2087.443 - 2099.579: 99.4163% ( 1) 00:10:36.234 3106.892 - 3131.164: 99.4238% ( 1) 00:10:36.234 3980.705 - 4004.978: 99.9027% ( 64) 00:10:36.234 4004.978 - 4029.250: 99.9850% ( 11) 00:10:36.234 4975.881 - 5000.154: 99.9925% ( 1) 00:10:36.234 5000.154 - 5024.427: 100.0000% ( 1) 00:10:36.234 00:10:36.234 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:36.234 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:36.234 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:36.234 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:36.234 02:27:23 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:36.234 [ 00:10:36.234 { 00:10:36.234 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:36.234 "subtype": "Discovery", 00:10:36.234 "listen_addresses": [], 00:10:36.234 "allow_any_host": true, 00:10:36.234 "hosts": [] 00:10:36.234 }, 00:10:36.234 { 00:10:36.234 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:36.234 "subtype": "NVMe", 00:10:36.234 "listen_addresses": [ 00:10:36.234 { 00:10:36.234 "trtype": "VFIOUSER", 00:10:36.234 "adrfam": "IPv4", 00:10:36.234 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:36.234 "trsvcid": "0" 00:10:36.234 } 00:10:36.234 ], 00:10:36.234 "allow_any_host": true, 00:10:36.234 "hosts": [], 00:10:36.234 "serial_number": "SPDK1", 00:10:36.234 "model_number": "SPDK bdev Controller", 00:10:36.234 "max_namespaces": 32, 00:10:36.234 "min_cntlid": 1, 00:10:36.234 "max_cntlid": 65519, 00:10:36.234 "namespaces": [ 00:10:36.234 { 00:10:36.234 "nsid": 1, 00:10:36.234 "bdev_name": "Malloc1", 00:10:36.234 "name": "Malloc1", 00:10:36.234 "nguid": "65EF873AFEB24B5E98B739B1C2B43901", 00:10:36.234 "uuid": "65ef873a-feb2-4b5e-98b7-39b1c2b43901" 00:10:36.234 }, 00:10:36.234 { 00:10:36.234 "nsid": 2, 00:10:36.234 "bdev_name": "Malloc3", 00:10:36.234 "name": "Malloc3", 00:10:36.234 "nguid": "47FC2F9C36D34A5DB1E2916C42BBB880", 00:10:36.234 "uuid": "47fc2f9c-36d3-4a5d-b1e2-916c42bbb880" 00:10:36.234 } 00:10:36.234 ] 00:10:36.234 }, 00:10:36.234 { 00:10:36.234 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:36.234 "subtype": "NVMe", 00:10:36.234 "listen_addresses": [ 00:10:36.234 { 00:10:36.234 "trtype": "VFIOUSER", 00:10:36.234 "adrfam": "IPv4", 00:10:36.234 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:36.234 "trsvcid": "0" 00:10:36.234 } 00:10:36.234 ], 00:10:36.234 "allow_any_host": true, 00:10:36.234 "hosts": [], 00:10:36.234 "serial_number": "SPDK2", 00:10:36.234 "model_number": "SPDK bdev Controller", 00:10:36.234 "max_namespaces": 32, 00:10:36.234 "min_cntlid": 1, 00:10:36.234 "max_cntlid": 65519, 00:10:36.234 "namespaces": [ 00:10:36.234 { 00:10:36.234 "nsid": 1, 00:10:36.234 "bdev_name": "Malloc2", 00:10:36.234 "name": "Malloc2", 00:10:36.234 "nguid": "4358F44CE9AF40B899C186FD20406CE8", 00:10:36.234 "uuid": "4358f44c-e9af-40b8-99c1-86fd20406ce8" 00:10:36.234 } 00:10:36.234 ] 00:10:36.234 } 00:10:36.234 ] 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2257616 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:36.492 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:36.492 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.492 [2024-05-15 02:27:23.822498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:36.749 Malloc4 00:10:36.749 02:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:36.749 [2024-05-15 02:27:24.143897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:36.749 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:37.007 Asynchronous Event Request test 00:10:37.007 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.007 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.007 Registering asynchronous event callbacks... 00:10:37.007 Starting namespace attribute notice tests for all controllers... 00:10:37.007 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:37.007 aer_cb - Changed Namespace 00:10:37.007 Cleaning up... 00:10:37.007 [ 00:10:37.007 { 00:10:37.007 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:37.007 "subtype": "Discovery", 00:10:37.007 "listen_addresses": [], 00:10:37.007 "allow_any_host": true, 00:10:37.007 "hosts": [] 00:10:37.007 }, 00:10:37.007 { 00:10:37.007 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:37.007 "subtype": "NVMe", 00:10:37.007 "listen_addresses": [ 00:10:37.007 { 00:10:37.007 "trtype": "VFIOUSER", 00:10:37.007 "adrfam": "IPv4", 00:10:37.007 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:37.007 "trsvcid": "0" 00:10:37.007 } 00:10:37.007 ], 00:10:37.007 "allow_any_host": true, 00:10:37.007 "hosts": [], 00:10:37.007 "serial_number": "SPDK1", 00:10:37.007 "model_number": "SPDK bdev Controller", 00:10:37.007 "max_namespaces": 32, 00:10:37.007 "min_cntlid": 1, 00:10:37.007 "max_cntlid": 65519, 00:10:37.007 "namespaces": [ 00:10:37.007 { 00:10:37.007 "nsid": 1, 00:10:37.007 "bdev_name": "Malloc1", 00:10:37.007 "name": "Malloc1", 00:10:37.007 "nguid": "65EF873AFEB24B5E98B739B1C2B43901", 00:10:37.007 "uuid": "65ef873a-feb2-4b5e-98b7-39b1c2b43901" 00:10:37.007 }, 00:10:37.007 { 00:10:37.007 "nsid": 2, 00:10:37.007 "bdev_name": "Malloc3", 00:10:37.007 "name": "Malloc3", 00:10:37.007 "nguid": "47FC2F9C36D34A5DB1E2916C42BBB880", 00:10:37.007 "uuid": "47fc2f9c-36d3-4a5d-b1e2-916c42bbb880" 00:10:37.007 } 00:10:37.007 ] 00:10:37.007 }, 00:10:37.007 { 00:10:37.007 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:37.007 "subtype": "NVMe", 00:10:37.007 "listen_addresses": [ 00:10:37.007 { 00:10:37.007 "trtype": "VFIOUSER", 00:10:37.007 "adrfam": "IPv4", 00:10:37.007 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:37.007 "trsvcid": "0" 00:10:37.007 } 00:10:37.007 ], 00:10:37.007 "allow_any_host": true, 00:10:37.007 "hosts": [], 00:10:37.007 "serial_number": "SPDK2", 00:10:37.007 "model_number": "SPDK bdev Controller", 00:10:37.007 
"max_namespaces": 32, 00:10:37.007 "min_cntlid": 1, 00:10:37.007 "max_cntlid": 65519, 00:10:37.007 "namespaces": [ 00:10:37.007 { 00:10:37.007 "nsid": 1, 00:10:37.007 "bdev_name": "Malloc2", 00:10:37.007 "name": "Malloc2", 00:10:37.007 "nguid": "4358F44CE9AF40B899C186FD20406CE8", 00:10:37.007 "uuid": "4358f44c-e9af-40b8-99c1-86fd20406ce8" 00:10:37.007 }, 00:10:37.007 { 00:10:37.007 "nsid": 2, 00:10:37.007 "bdev_name": "Malloc4", 00:10:37.007 "name": "Malloc4", 00:10:37.007 "nguid": "3D4F5751B0004CB4A92B8701CD8EF6E7", 00:10:37.007 "uuid": "3d4f5751-b000-4cb4-a92b-8701cd8ef6e7" 00:10:37.007 } 00:10:37.007 ] 00:10:37.007 } 00:10:37.007 ] 00:10:37.007 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2257616 00:10:37.007 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:37.007 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2251374 00:10:37.007 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2251374 ']' 00:10:37.007 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2251374 00:10:37.007 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:37.007 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:37.007 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2251374 00:10:37.266 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:37.266 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:37.266 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2251374' 00:10:37.266 killing process with pid 2251374 00:10:37.266 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2251374 00:10:37.266 [2024-05-15 02:27:24.433754] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:37.266 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2251374 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2257756 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2257756' 00:10:37.525 Process pid: 2257756 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2257756 00:10:37.525 02:27:24 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2257756 ']' 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:37.525 02:27:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:37.525 [2024-05-15 02:27:24.856013] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:37.525 [2024-05-15 02:27:24.857027] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:10:37.525 [2024-05-15 02:27:24.857095] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.525 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.525 [2024-05-15 02:27:24.929255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.784 [2024-05-15 02:27:25.046170] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.784 [2024-05-15 02:27:25.046234] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.784 [2024-05-15 02:27:25.046250] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.784 [2024-05-15 02:27:25.046263] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.784 [2024-05-15 02:27:25.046275] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.784 [2024-05-15 02:27:25.046370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.784 [2024-05-15 02:27:25.046447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.784 [2024-05-15 02:27:25.047953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.784 [2024-05-15 02:27:25.047958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.784 [2024-05-15 02:27:25.167275] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:37.784 [2024-05-15 02:27:25.167505] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:37.784 [2024-05-15 02:27:25.167761] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:37.784 [2024-05-15 02:27:25.168440] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:37.784 [2024-05-15 02:27:25.168689] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
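The rpc.py calls that follow set up the freshly restarted interrupt-mode target. Condensed into a rough sketch (assuming rpc.py talks to the default /var/tmp/spdk.sock, with dir used here only as shorthand; the meaning of the extra -M -I transport arguments is not expanded, they are simply what this interrupt-mode run passes through), the per-device sequence is:

  rpc.py nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
      dir=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$dir"
      rpc.py bdev_malloc_create 64 512 -b Malloc$i       # 64 MB malloc bdev, 512-byte blocks
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
  done

Note that a VFIOUSER listener address is a directory rather than an IP address; the initiators earlier in the log point at the same directory via their traddr fields.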
00:10:38.718 02:27:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:38.718 02:27:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:10:38.718 02:27:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:39.734 02:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:39.734 02:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:39.734 02:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:39.734 02:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:39.734 02:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:39.734 02:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:39.993 Malloc1 00:10:39.993 02:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:40.558 02:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:40.558 02:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:40.816 [2024-05-15 02:27:28.192603] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:40.816 02:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:40.816 02:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:40.816 02:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:41.381 Malloc2 00:10:41.381 02:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:41.639 02:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:41.897 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2257756 00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2257756 ']' 00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2257756 
00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2257756 00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:42.154 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2257756' 00:10:42.154 killing process with pid 2257756 00:10:42.155 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2257756 00:10:42.155 [2024-05-15 02:27:29.380966] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:42.155 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2257756 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:42.412 00:10:42.412 real 0m53.600s 00:10:42.412 user 3m31.035s 00:10:42.412 sys 0m4.884s 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:42.412 ************************************ 00:10:42.412 END TEST nvmf_vfio_user 00:10:42.412 ************************************ 00:10:42.412 02:27:29 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:42.412 02:27:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:42.412 02:27:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:42.412 02:27:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.412 ************************************ 00:10:42.412 START TEST nvmf_vfio_user_nvme_compliance 00:10:42.412 ************************************ 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:42.412 * Looking for test storage... 
00:10:42.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.412 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.413 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2258371 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2258371' 00:10:42.671 Process pid: 2258371 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2258371 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 2258371 ']' 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:42.671 02:27:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:42.671 [2024-05-15 02:27:29.872831] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:10:42.671 [2024-05-15 02:27:29.872903] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.671 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.671 [2024-05-15 02:27:29.941446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.671 [2024-05-15 02:27:30.057283] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.671 [2024-05-15 02:27:30.057344] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.671 [2024-05-15 02:27:30.057379] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.671 [2024-05-15 02:27:30.057392] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.671 [2024-05-15 02:27:30.057402] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
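[Editorial note] The compliance prologue traced above starts nvmf_tgt on three cores and then blocks in the waitforlisten helper until the RPC socket answers. A minimal standalone sketch of that startup step follows; the polling loop is an assumed stand-in for waitforlisten (which lives in test/common/autotest_common.sh), not its actual implementation.

    # Start the target on cores 0-2 (-m 0x7) with all tracepoint groups enabled (-e 0xFFFF),
    # mirroring compliance.sh@19/@20 above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT

    # Assumed equivalent of waitforlisten: poll the default RPC socket until it responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done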
00:10:42.671 [2024-05-15 02:27:30.057465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.671 [2024-05-15 02:27:30.057526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.671 [2024-05-15 02:27:30.057529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.929 02:27:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:42.929 02:27:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:10:42.929 02:27:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.861 malloc0 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.861 [2024-05-15 02:27:31.251311] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.861 02:27:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:44.118 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.118 00:10:44.118 00:10:44.118 CUnit - A unit testing framework for C - Version 2.1-3 00:10:44.118 http://cunit.sourceforge.net/ 00:10:44.118 00:10:44.118 00:10:44.118 Suite: nvme_compliance 00:10:44.118 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 02:27:31.428119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.118 [2024-05-15 02:27:31.429602] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:44.118 [2024-05-15 02:27:31.429627] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:44.119 [2024-05-15 02:27:31.429654] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:44.119 [2024-05-15 02:27:31.431138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.119 passed 00:10:44.119 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 02:27:31.516772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.119 [2024-05-15 02:27:31.522807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.377 passed 00:10:44.377 Test: admin_identify_ns ...[2024-05-15 02:27:31.609403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.377 [2024-05-15 02:27:31.668945] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:44.377 [2024-05-15 02:27:31.676944] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:44.377 [2024-05-15 02:27:31.698067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.377 passed 00:10:44.377 Test: admin_get_features_mandatory_features ...[2024-05-15 02:27:31.780765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.377 [2024-05-15 02:27:31.783782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.634 passed 00:10:44.634 Test: admin_get_features_optional_features ...[2024-05-15 02:27:31.868390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.634 [2024-05-15 02:27:31.871409] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.634 passed 00:10:44.634 Test: admin_set_features_number_of_queues ...[2024-05-15 02:27:31.955622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.891 [2024-05-15 02:27:32.060051] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.891 passed 00:10:44.891 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 02:27:32.144025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.891 [2024-05-15 02:27:32.147032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.891 passed 
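[Editorial note] For readability, the target-side RPC sequence traced in the compliance prologue can be collapsed into the sketch below. It assumes, as the test scripts appear to, that rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the commands and arguments themselves are copied from the xtrace above.

    # vfio-user target setup, consolidated from the rpc_cmd trace.
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

    # The compliance binary then attaches over vfio-user:
    ./test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'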
00:10:44.891 Test: admin_get_log_page_with_lpo ...[2024-05-15 02:27:32.231583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.891 [2024-05-15 02:27:32.298948] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:45.149 [2024-05-15 02:27:32.312028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.149 passed 00:10:45.149 Test: fabric_property_get ...[2024-05-15 02:27:32.396782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.149 [2024-05-15 02:27:32.398062] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:45.149 [2024-05-15 02:27:32.399807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.149 passed 00:10:45.149 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 02:27:32.480346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.149 [2024-05-15 02:27:32.481609] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:45.149 [2024-05-15 02:27:32.483364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.149 passed 00:10:45.407 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 02:27:32.569549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.407 [2024-05-15 02:27:32.653942] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.407 [2024-05-15 02:27:32.672956] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.407 [2024-05-15 02:27:32.678063] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.407 passed 00:10:45.407 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 02:27:32.761736] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.407 [2024-05-15 02:27:32.763025] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:45.407 [2024-05-15 02:27:32.764760] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.407 passed 00:10:45.665 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 02:27:32.848141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.665 [2024-05-15 02:27:32.927943] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:45.665 [2024-05-15 02:27:32.951939] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.665 [2024-05-15 02:27:32.957061] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.665 passed 00:10:45.665 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 02:27:33.037734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.665 [2024-05-15 02:27:33.039026] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:45.665 [2024-05-15 02:27:33.039080] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:45.665 [2024-05-15 02:27:33.042767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.665 passed 00:10:45.923 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
02:27:33.127670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.923 [2024-05-15 02:27:33.218941] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:45.923 [2024-05-15 02:27:33.226943] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:45.923 [2024-05-15 02:27:33.234941] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:45.923 [2024-05-15 02:27:33.242943] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:45.923 [2024-05-15 02:27:33.272042] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.923 passed 00:10:46.181 Test: admin_create_io_sq_verify_pc ...[2024-05-15 02:27:33.356819] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:46.181 [2024-05-15 02:27:33.371954] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:46.181 [2024-05-15 02:27:33.388234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:46.181 passed 00:10:46.181 Test: admin_create_io_qp_max_qps ...[2024-05-15 02:27:33.471803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:47.554 [2024-05-15 02:27:34.570949] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:47.554 [2024-05-15 02:27:34.967287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:47.811 passed 00:10:47.812 Test: admin_create_io_sq_shared_cq ...[2024-05-15 02:27:35.053488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:47.812 [2024-05-15 02:27:35.184944] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:47.812 [2024-05-15 02:27:35.222028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:48.070 passed 00:10:48.070 00:10:48.070 Run Summary: Type Total Ran Passed Failed Inactive 00:10:48.070 suites 1 1 n/a 0 0 00:10:48.070 tests 18 18 18 0 0 00:10:48.070 asserts 360 360 360 0 n/a 00:10:48.070 00:10:48.070 Elapsed time = 1.573 seconds 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2258371 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 2258371 ']' 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 2258371 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2258371 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2258371' 00:10:48.070 killing process with pid 2258371 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 2258371 00:10:48.070 [2024-05-15 02:27:35.300140] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:48.070 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 2258371 00:10:48.328 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:48.328 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:48.328 00:10:48.328 real 0m5.845s 00:10:48.328 user 0m16.275s 00:10:48.328 sys 0m0.604s 00:10:48.328 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:48.328 02:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:48.328 ************************************ 00:10:48.328 END TEST nvmf_vfio_user_nvme_compliance 00:10:48.328 ************************************ 00:10:48.328 02:27:35 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:48.328 02:27:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:48.328 02:27:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:48.328 02:27:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:48.328 ************************************ 00:10:48.328 START TEST nvmf_vfio_user_fuzz 00:10:48.328 ************************************ 00:10:48.328 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:48.328 * Looking for test storage... 
00:10:48.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:48.329 02:27:35 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2259094 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2259094' 00:10:48.329 Process pid: 2259094 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2259094 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 2259094 ']' 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:48.329 02:27:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:48.895 02:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:48.895 02:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:10:48.895 02:27:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.830 malloc0 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:49.830 02:27:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:21.941 Fuzzing completed. Shutting down the fuzz application 00:11:21.941 00:11:21.941 Dumping successful admin opcodes: 00:11:21.941 8, 9, 10, 24, 00:11:21.941 Dumping successful io opcodes: 00:11:21.941 0, 00:11:21.941 NS: 0x200003a1ef00 I/O qp, Total commands completed: 654284, total successful commands: 2543, random_seed: 3042516288 00:11:21.941 NS: 0x200003a1ef00 admin qp, Total commands completed: 84090, total successful commands: 668, random_seed: 3417410560 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2259094 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 2259094 ']' 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 2259094 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2259094 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2259094' 00:11:21.941 killing process with pid 2259094 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 2259094 00:11:21.941 02:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 2259094 00:11:21.941 02:28:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
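[Editorial note] Outside the harness, the fuzz stage above reduces to two commands plus the same vfio-user subsystem setup used by the compliance test. The flags below are copied verbatim from the trace (core masks, 30-second runtime, fixed seed 123456, target transport ID); the remaining nvme_fuzz options (-N, -a) are left unexplained here rather than guessed at.

    # Target on core 0 (-m 0x1), subsystem nqn.2021-09.io.spdk:cnode0 exposed at /var/run/vfio-user.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    # Fuzzer on core 1, 30 s run, fixed seed, attached over vfio-user.
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a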
00:11:21.941 02:28:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:21.941 00:11:21.941 real 0m32.407s 00:11:21.941 user 0m33.255s 00:11:21.941 sys 0m25.851s 00:11:21.941 02:28:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:21.941 02:28:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:21.941 ************************************ 00:11:21.941 END TEST nvmf_vfio_user_fuzz 00:11:21.941 ************************************ 00:11:21.941 02:28:08 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:21.941 02:28:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:21.941 02:28:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:21.941 02:28:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.941 ************************************ 00:11:21.942 START TEST nvmf_host_management 00:11:21.942 ************************************ 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:21.942 * Looking for test storage... 00:11:21.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.942 02:28:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.333 02:28:10 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:23.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:23.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
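[Editorial note] The device-discovery trace above matches PCI vendor/device IDs against the e810/x722/mlx tables and then resolves each matching PCI function to its net interface via sysfs. The same lookup can be done by hand (PCI addresses and interface names taken from the "Found ..." lines in the trace):

    lspci -d 8086:159b                          # E810 ports, here 0000:0a:00.0 and 0000:0a:00.1
    ls /sys/bus/pci/devices/0000:0a:00.0/net    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:0a:00.1/net    # -> cvl_0_1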
00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.333 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:23.334 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:23.334 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.334 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:23.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:11:23.593 00:11:23.593 --- 10.0.0.2 ping statistics --- 00:11:23.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.593 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:11:23.593 00:11:23.593 --- 10.0.0.1 ping statistics --- 00:11:23.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.593 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2264835 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2264835 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2264835 ']' 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.593 02:28:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:23.594 02:28:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:23.594 [2024-05-15 02:28:10.889878] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:11:23.594 [2024-05-15 02:28:10.889986] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.594 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.594 [2024-05-15 02:28:10.974528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.852 [2024-05-15 02:28:11.097938] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.852 [2024-05-15 02:28:11.098002] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.852 [2024-05-15 02:28:11.098017] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.852 [2024-05-15 02:28:11.098029] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.852 [2024-05-15 02:28:11.098040] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.852 [2024-05-15 02:28:11.098094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.852 [2024-05-15 02:28:11.098146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:23.852 [2024-05-15 02:28:11.098149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.852 [2024-05-15 02:28:11.098122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.785 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:24.785 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:24.785 02:28:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:24.785 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.785 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.785 02:28:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.785 02:28:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.786 [2024-05-15 02:28:11.911034] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.786 02:28:11 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.786 Malloc0 00:11:24.786 [2024-05-15 02:28:11.972435] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:24.786 [2024-05-15 02:28:11.972744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.786 02:28:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2265011 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2265011 /var/tmp/bdevperf.sock 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2265011 ']' 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:24.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:24.786 { 00:11:24.786 "params": { 00:11:24.786 "name": "Nvme$subsystem", 00:11:24.786 "trtype": "$TEST_TRANSPORT", 00:11:24.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.786 "adrfam": "ipv4", 00:11:24.786 "trsvcid": "$NVMF_PORT", 00:11:24.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.786 "hdgst": ${hdgst:-false}, 00:11:24.786 "ddgst": ${ddgst:-false} 00:11:24.786 }, 00:11:24.786 "method": "bdev_nvme_attach_controller" 00:11:24.786 } 00:11:24.786 EOF 00:11:24.786 )") 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
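The gen_nvmf_target_json fragment above is fed to bdevperf through --json /dev/fd/63. Written to a file instead of a process substitution, an equivalent run would look roughly like the sketch below; the params object is the one this run resolves to (printed just after), while the surrounding "subsystems"/"bdev" wrapper is the usual shape of that helper's output and is assumed here rather than shown in the trace.

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10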
00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:24.786 02:28:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:24.786 "params": { 00:11:24.786 "name": "Nvme0", 00:11:24.786 "trtype": "tcp", 00:11:24.786 "traddr": "10.0.0.2", 00:11:24.786 "adrfam": "ipv4", 00:11:24.786 "trsvcid": "4420", 00:11:24.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:24.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:24.786 "hdgst": false, 00:11:24.786 "ddgst": false 00:11:24.786 }, 00:11:24.786 "method": "bdev_nvme_attach_controller" 00:11:24.786 }' 00:11:24.786 [2024-05-15 02:28:12.052420] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:11:24.786 [2024-05-15 02:28:12.052493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265011 ] 00:11:24.786 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.786 [2024-05-15 02:28:12.125081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.044 [2024-05-15 02:28:12.237399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.302 Running I/O for 10 seconds... 00:11:25.869 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:25.869 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:25.869 02:28:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:25.869 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.869 02:28:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.869 02:28:13 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.869 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:25.869 [2024-05-15 02:28:13.053811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.869 [2024-05-15 02:28:13.053867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.869 [2024-05-15 02:28:13.053885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.869 [2024-05-15 02:28:13.053899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.869 [2024-05-15 02:28:13.053914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.869 [2024-05-15 02:28:13.053927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.869 [2024-05-15 02:28:13.053948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.869 [2024-05-15 02:28:13.053962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.869 [2024-05-15 02:28:13.053975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdb990 is same with the state(5) to be set 00:11:25.869 [2024-05-15 02:28:13.054832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.869 [2024-05-15 02:28:13.054858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.869 [2024-05-15 02:28:13.054883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.869 [2024-05-15 02:28:13.054899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.869 [2024-05-15 02:28:13.054938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.869 [2024-05-15 02:28:13.054955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.869 [2024-05-15 02:28:13.054971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.869 [2024-05-15 02:28:13.054987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.869 [2024-05-15 02:28:13.055013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.869 [2024-05-15 02:28:13.055027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.869 [2024-05-15 02:28:13.055043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.869 [2024-05-15 02:28:13.055058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.055977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.055992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.056007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.056022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.056037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.056051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.056066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.056081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.056096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.056111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.870 [2024-05-15 02:28:13.056125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.870 [2024-05-15 02:28:13.056140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.871 [2024-05-15 02:28:13.056689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:25.871 [2024-05-15 02:28:13.056864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.871 [2024-05-15 02:28:13.056879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.871 [2024-05-15 02:28:13.056987] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x110cf20 was disconnected and freed. reset controller. 00:11:25.871 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.871 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:25.871 [2024-05-15 02:28:13.058121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:25.871 task offset: 73728 on job bdev=Nvme0n1 fails 00:11:25.871 00:11:25.871 Latency(us) 00:11:25.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.871 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:25.871 Job: Nvme0n1 ended in about 0.51 seconds with error 00:11:25.871 Verification LBA range: start 0x0 length 0x400 00:11:25.871 Nvme0n1 : 0.51 1130.07 70.63 125.56 0.00 49832.85 2548.62 44273.21 00:11:25.871 =================================================================================================================== 00:11:25.871 Total : 1130.07 70.63 125.56 0.00 49832.85 2548.62 44273.21 00:11:25.871 [2024-05-15 02:28:13.059998] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:25.871 [2024-05-15 02:28:13.060032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb990 (9): Bad file descriptor 00:11:25.871 02:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.871 02:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:25.871 [2024-05-15 02:28:13.111169] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
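The dump above is the core of the host-management check: once bdevperf has real I/O in flight, the host is removed from the subsystem, every outstanding WRITE is aborted with SQ DELETION, and re-adding the host lets the automatic controller reset finish ("Resetting controller successful"). A condensed sketch of that sequence, with the rpc.py path, socket and NQNs as they appear in this trace (the polling loop mirrors the read_io_count check above rather than quoting host_management.sh verbatim):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# 1. wait until bdevperf has issued at least 100 reads against Nvme0n1
until [ "$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')" -ge 100 ]; do
    sleep 0.25
done
# 2. revoke the host: in-flight commands are aborted (the SQ DELETION completions above)
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# 3. re-grant access so the bdev_nvme reset path can reconnect
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0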
00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2265011 00:11:26.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2265011) - No such process 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:26.806 { 00:11:26.806 "params": { 00:11:26.806 "name": "Nvme$subsystem", 00:11:26.806 "trtype": "$TEST_TRANSPORT", 00:11:26.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:26.806 "adrfam": "ipv4", 00:11:26.806 "trsvcid": "$NVMF_PORT", 00:11:26.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:26.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:26.806 "hdgst": ${hdgst:-false}, 00:11:26.806 "ddgst": ${ddgst:-false} 00:11:26.806 }, 00:11:26.806 "method": "bdev_nvme_attach_controller" 00:11:26.806 } 00:11:26.806 EOF 00:11:26.806 )") 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:26.806 02:28:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:26.806 "params": { 00:11:26.806 "name": "Nvme0", 00:11:26.806 "trtype": "tcp", 00:11:26.806 "traddr": "10.0.0.2", 00:11:26.806 "adrfam": "ipv4", 00:11:26.806 "trsvcid": "4420", 00:11:26.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:26.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:26.806 "hdgst": false, 00:11:26.806 "ddgst": false 00:11:26.806 }, 00:11:26.806 "method": "bdev_nvme_attach_controller" 00:11:26.806 }' 00:11:26.806 [2024-05-15 02:28:14.111549] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:11:26.807 [2024-05-15 02:28:14.111630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265294 ] 00:11:26.807 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.807 [2024-05-15 02:28:14.182330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.063 [2024-05-15 02:28:14.292732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.321 Running I/O for 1 seconds... 
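The "No such process" message followed by true above is intentional: bdevperf already exited when its controller was dropped, so the cleanup kill is made non-fatal before the CPU-core lock files are removed. The pattern used by the trap is simply:

kill -9 "$perfpid" || true   # ignore the error if bdevperf has already exited
rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004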
00:11:28.252 00:11:28.252 Latency(us) 00:11:28.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.252 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:28.252 Verification LBA range: start 0x0 length 0x400 00:11:28.252 Nvme0n1 : 1.02 1003.56 62.72 0.00 0.00 62908.37 1529.17 52428.80 00:11:28.252 =================================================================================================================== 00:11:28.252 Total : 1003.56 62.72 0.00 0.00 62908.37 1529.17 52428.80 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:28.509 rmmod nvme_tcp 00:11:28.509 rmmod nvme_fabrics 00:11:28.509 rmmod nvme_keyring 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2264835 ']' 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2264835 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 2264835 ']' 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 2264835 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2264835 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2264835' 00:11:28.509 killing process with pid 2264835 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 2264835 00:11:28.509 [2024-05-15 02:28:15.887624] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:28.509 02:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 2264835 00:11:28.767 [2024-05-15 02:28:16.166127] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:29.027 02:28:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.027 02:28:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.027 02:28:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.027 02:28:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.027 02:28:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.027 02:28:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.027 02:28:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.027 02:28:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.926 02:28:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:30.926 02:28:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:30.926 00:11:30.926 real 0m10.120s 00:11:30.926 user 0m24.084s 00:11:30.926 sys 0m3.051s 00:11:30.926 02:28:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:30.926 02:28:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.926 ************************************ 00:11:30.926 END TEST nvmf_host_management 00:11:30.926 ************************************ 00:11:30.926 02:28:18 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:30.926 02:28:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:30.926 02:28:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:30.926 02:28:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.926 ************************************ 00:11:30.926 START TEST nvmf_lvol 00:11:30.926 ************************************ 00:11:30.926 02:28:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:31.184 * Looking for test storage... 
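nvmftestfini above mirrors the setup: unload the host-side NVMe/TCP modules, stop the target started at the beginning of the test, and tear the namespace back down. Stripped of its helper functions it is roughly the following (module, namespace and interface names are from this trace; the ip netns delete is what _remove_spdk_ns does behind its xtrace_disable wrapper and is assumed here):

modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid"
ip netns delete cvl_0_0_ns_spdk 2>/dev/null
ip -4 addr flush cvl_0_1 2>/dev/null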
00:11:31.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.184 02:28:18 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.184 02:28:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:33.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:33.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:33.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.716 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:33.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:33.717 
02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:33.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:11:33.717 00:11:33.717 --- 10.0.0.2 ping statistics --- 00:11:33.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.717 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:11:33.717 00:11:33.717 --- 10.0.0.1 ping statistics --- 00:11:33.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.717 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2267783 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2267783 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 2267783 ']' 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:33.717 02:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:33.717 [2024-05-15 02:28:21.029481] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:11:33.717 [2024-05-15 02:28:21.029559] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.717 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.717 [2024-05-15 02:28:21.104883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:33.975 [2024-05-15 02:28:21.213347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.975 [2024-05-15 02:28:21.213398] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
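What the trace above has done is split the dual-port E810 NIC between two network stacks: cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2 for the target, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, so NVMe/TCP traffic crosses a real link. A minimal sketch of that plumbing, assuming the same interface names and addresses as the log:

  ip netns add cvl_0_0_ns_spdk                                   # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # both directions verified before any RPC
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1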
00:11:33.975 [2024-05-15 02:28:21.213426] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.975 [2024-05-15 02:28:21.213438] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.975 [2024-05-15 02:28:21.213448] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.975 [2024-05-15 02:28:21.213577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.975 [2024-05-15 02:28:21.213654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.975 [2024-05-15 02:28:21.213657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.975 02:28:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:33.975 02:28:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:11:33.975 02:28:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:33.975 02:28:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.975 02:28:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:33.975 02:28:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.975 02:28:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:34.232 [2024-05-15 02:28:21.560173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.232 02:28:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:34.491 02:28:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:34.491 02:28:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:34.749 02:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:34.749 02:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:35.007 02:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:35.300 02:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=54129676-2dd1-462f-9360-f1f7641c855a 00:11:35.300 02:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 54129676-2dd1-462f-9360-f1f7641c855a lvol 20 00:11:35.558 02:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=067418c6-8d95-41c8-9fd5-cfda55c410fc 00:11:35.558 02:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:35.816 02:28:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 067418c6-8d95-41c8-9fd5-cfda55c410fc 00:11:36.074 02:28:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
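With the target (pid 2267783) up and answering on /var/tmp/spdk.sock, nvmf_lvol.sh builds its storage stack purely over rpc.py: a TCP transport, two 64 MiB malloc bdevs striped into a raid0, an lvstore on top of the raid, a 20 MiB logical volume, and an NVMe-oF subsystem exposing that volume on 10.0.0.2:4420. Condensed, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the log:

  rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8192-byte in-capsule data
  rpc.py bdev_malloc_create 64 512                                # Malloc0 (64 MiB, 512 B blocks)
  rpc.py bdev_malloc_create 64 512                                # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                # prints the lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)               # 20 MiB lvol, prints its UUID
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420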
00:11:36.331 [2024-05-15 02:28:23.636260] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:36.331 [2024-05-15 02:28:23.636537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.331 02:28:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:36.588 02:28:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2268210 00:11:36.588 02:28:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:36.588 02:28:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:36.588 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.522 02:28:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 067418c6-8d95-41c8-9fd5-cfda55c410fc MY_SNAPSHOT 00:11:38.089 02:28:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=89073346-d3da-4fea-8952-98e3222979f9 00:11:38.089 02:28:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 067418c6-8d95-41c8-9fd5-cfda55c410fc 30 00:11:38.089 02:28:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 89073346-d3da-4fea-8952-98e3222979f9 MY_CLONE 00:11:38.347 02:28:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c9109e71-c894-4ae4-9fcf-e64ef4522b72 00:11:38.347 02:28:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c9109e71-c894-4ae4-9fcf-e64ef4522b72 00:11:38.912 02:28:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2268210 00:11:47.021 Initializing NVMe Controllers 00:11:47.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:47.021 Controller IO queue size 128, less than required. 00:11:47.021 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:47.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:47.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:47.021 Initialization complete. Launching workers. 
00:11:47.021 ======================================================== 00:11:47.021 Latency(us) 00:11:47.021 Device Information : IOPS MiB/s Average min max 00:11:47.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11002.10 42.98 11640.32 2032.42 80819.41 00:11:47.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10870.40 42.46 11780.41 2124.11 79590.04 00:11:47.021 ======================================================== 00:11:47.021 Total : 21872.50 85.44 11709.94 2032.42 80819.41 00:11:47.021 00:11:47.021 02:28:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:47.278 02:28:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 067418c6-8d95-41c8-9fd5-cfda55c410fc 00:11:47.536 02:28:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54129676-2dd1-462f-9360-f1f7641c855a 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.794 rmmod nvme_tcp 00:11:47.794 rmmod nvme_fabrics 00:11:47.794 rmmod nvme_keyring 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2267783 ']' 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2267783 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 2267783 ']' 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 2267783 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2267783 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2267783' 00:11:47.794 killing process with pid 2267783 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 2267783 00:11:47.794 [2024-05-15 02:28:35.183987] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
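While spdk_nvme_perf drives the 10-second randwrite workload against 10.0.0.2:4420 from cores 3-4, the script walks the lvol snapshot/clone path and then tears the stack back down, which is what produces the delete RPCs just above. In outline, reusing the UUIDs captured earlier:

  snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the live lvol
  rpc.py bdev_lvol_resize "$lvol" 30                          # grow the origin from 20 to 30 MiB
  clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)        # thin clone backed by the snapshot
  rpc.py bdev_lvol_inflate "$clone"                           # copy clusters so the clone stands alone
  wait $perf_pid                                              # let the 10 s workload finish (pid 2268210 here)
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"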
scheduled for removal in v24.09 hit 1 times 00:11:47.794 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 2267783 00:11:48.362 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.362 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.362 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.362 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.362 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.362 02:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.362 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.362 02:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.265 00:11:50.265 real 0m19.238s 00:11:50.265 user 1m4.107s 00:11:50.265 sys 0m5.917s 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:50.265 ************************************ 00:11:50.265 END TEST nvmf_lvol 00:11:50.265 ************************************ 00:11:50.265 02:28:37 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:50.265 02:28:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:50.265 02:28:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:50.265 02:28:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.265 ************************************ 00:11:50.265 START TEST nvmf_lvs_grow 00:11:50.265 ************************************ 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:50.265 * Looking for test storage... 
00:11:50.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.265 02:28:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:52.797 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:52.797 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:52.797 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:52.797 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:52.797 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.798 02:28:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:52.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:11:52.798 00:11:52.798 --- 10.0.0.2 ping statistics --- 00:11:52.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.798 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:11:52.798 00:11:52.798 --- 10.0.0.1 ping statistics --- 00:11:52.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.798 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2272024 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2272024 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 2272024 ']' 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:52.798 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:52.798 [2024-05-15 02:28:40.115406] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:11:52.798 [2024-05-15 02:28:40.115498] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.798 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.798 [2024-05-15 02:28:40.194040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.056 [2024-05-15 02:28:40.309141] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.056 [2024-05-15 02:28:40.309199] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
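As in the lvol run, the target for the lvs_grow suite is started inside the namespace so it owns cvl_0_0, this time on a single core (-m 0x1), and the script waits for the RPC socket before issuing anything. A rough sketch, assuming the app is simply backgrounded and its pid captured:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!                          # 2272024 in this run
  # waitforlisten polls /var/tmp/spdk.sock until the target answers, then the RPCs below can flow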
00:11:53.056 [2024-05-15 02:28:40.309227] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.056 [2024-05-15 02:28:40.309238] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.056 [2024-05-15 02:28:40.309247] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.057 [2024-05-15 02:28:40.309287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.057 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:53.057 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:11:53.057 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.057 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.057 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:53.057 02:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.057 02:28:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:53.315 [2024-05-15 02:28:40.729407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:53.573 ************************************ 00:11:53.573 START TEST lvs_grow_clean 00:11:53.573 ************************************ 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:53.573 02:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:53.831 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:53.831 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:54.089 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:11:54.089 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:54.089 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:11:54.348 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:54.348 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:54.348 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b lvol 150 00:11:54.607 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc 00:11:54.607 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:54.607 02:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:54.865 [2024-05-15 02:28:42.030140] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:54.865 [2024-05-15 02:28:42.030245] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:54.865 true 00:11:54.865 02:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:11:54.865 02:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:55.151 02:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:55.151 02:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:55.151 02:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc 00:11:55.410 02:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:55.668 [2024-05-15 02:28:43.061080] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:55.668 [2024-05-15 
02:28:43.061408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.668 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2272457 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2272457 /var/tmp/bdevperf.sock 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 2272457 ']' 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:56.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:56.234 02:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:56.234 [2024-05-15 02:28:43.417429] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:11:56.234 [2024-05-15 02:28:43.417515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272457 ] 00:11:56.234 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.234 [2024-05-15 02:28:43.495772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.234 [2024-05-15 02:28:43.606961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.166 02:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:57.166 02:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:11:57.166 02:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:57.425 Nvme0n1 00:11:57.425 02:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:57.682 [ 00:11:57.682 { 00:11:57.682 "name": "Nvme0n1", 00:11:57.682 "aliases": [ 00:11:57.682 "c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc" 00:11:57.682 ], 00:11:57.682 "product_name": "NVMe disk", 00:11:57.682 "block_size": 4096, 00:11:57.682 "num_blocks": 38912, 00:11:57.682 "uuid": "c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc", 00:11:57.682 "assigned_rate_limits": { 00:11:57.682 "rw_ios_per_sec": 0, 00:11:57.682 "rw_mbytes_per_sec": 0, 00:11:57.682 "r_mbytes_per_sec": 0, 00:11:57.682 "w_mbytes_per_sec": 0 00:11:57.682 }, 00:11:57.682 "claimed": false, 00:11:57.682 "zoned": false, 00:11:57.682 "supported_io_types": { 00:11:57.682 "read": true, 00:11:57.682 "write": true, 00:11:57.682 "unmap": true, 00:11:57.682 "write_zeroes": true, 00:11:57.682 "flush": true, 00:11:57.682 "reset": true, 00:11:57.682 "compare": true, 00:11:57.682 "compare_and_write": true, 00:11:57.682 "abort": true, 00:11:57.682 "nvme_admin": true, 00:11:57.682 "nvme_io": true 00:11:57.682 }, 00:11:57.682 "memory_domains": [ 00:11:57.682 { 00:11:57.682 "dma_device_id": "system", 00:11:57.682 "dma_device_type": 1 00:11:57.682 } 00:11:57.682 ], 00:11:57.682 "driver_specific": { 00:11:57.682 "nvme": [ 00:11:57.682 { 00:11:57.682 "trid": { 00:11:57.682 "trtype": "TCP", 00:11:57.682 "adrfam": "IPv4", 00:11:57.682 "traddr": "10.0.0.2", 00:11:57.682 "trsvcid": "4420", 00:11:57.682 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:57.682 }, 00:11:57.682 "ctrlr_data": { 00:11:57.682 "cntlid": 1, 00:11:57.682 "vendor_id": "0x8086", 00:11:57.682 "model_number": "SPDK bdev Controller", 00:11:57.682 "serial_number": "SPDK0", 00:11:57.682 "firmware_revision": "24.05", 00:11:57.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:57.682 "oacs": { 00:11:57.682 "security": 0, 00:11:57.682 "format": 0, 00:11:57.682 "firmware": 0, 00:11:57.682 "ns_manage": 0 00:11:57.682 }, 00:11:57.682 "multi_ctrlr": true, 00:11:57.682 "ana_reporting": false 00:11:57.682 }, 00:11:57.682 "vs": { 00:11:57.682 "nvme_version": "1.3" 00:11:57.682 }, 00:11:57.682 "ns_data": { 00:11:57.682 "id": 1, 00:11:57.682 "can_share": true 00:11:57.682 } 00:11:57.682 } 00:11:57.682 ], 00:11:57.682 "mp_policy": "active_passive" 00:11:57.682 } 00:11:57.682 } 00:11:57.682 ] 00:11:57.941 02:28:45 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2272726 00:11:57.941 02:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:57.941 02:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:57.941 Running I/O for 10 seconds... 00:11:58.874 Latency(us) 00:11:58.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.874 Nvme0n1 : 1.00 13445.00 52.52 0.00 0.00 0.00 0.00 0.00 00:11:58.874 =================================================================================================================== 00:11:58.874 Total : 13445.00 52.52 0.00 0.00 0.00 0.00 0.00 00:11:58.874 00:11:59.807 02:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:11:59.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.808 Nvme0n1 : 2.00 13674.50 53.42 0.00 0.00 0.00 0.00 0.00 00:11:59.808 =================================================================================================================== 00:11:59.808 Total : 13674.50 53.42 0.00 0.00 0.00 0.00 0.00 00:11:59.808 00:12:00.066 true 00:12:00.066 02:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:12:00.066 02:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:00.324 02:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:00.324 02:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:00.324 02:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2272726 00:12:00.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.889 Nvme0n1 : 3.00 13599.00 53.12 0.00 0.00 0.00 0.00 0.00 00:12:00.889 =================================================================================================================== 00:12:00.889 Total : 13599.00 53.12 0.00 0.00 0.00 0.00 0.00 00:12:00.889 00:12:01.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.823 Nvme0n1 : 4.00 13615.25 53.18 0.00 0.00 0.00 0.00 0.00 00:12:01.823 =================================================================================================================== 00:12:01.823 Total : 13615.25 53.18 0.00 0.00 0.00 0.00 0.00 00:12:01.823 00:12:03.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.195 Nvme0n1 : 5.00 13703.40 53.53 0.00 0.00 0.00 0.00 0.00 00:12:03.195 =================================================================================================================== 00:12:03.195 Total : 13703.40 53.53 0.00 0.00 0.00 0.00 0.00 00:12:03.195 00:12:04.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.128 Nvme0n1 : 6.00 13716.83 53.58 0.00 0.00 0.00 0.00 0.00 00:12:04.128 
=================================================================================================================== 00:12:04.128 Total : 13716.83 53.58 0.00 0.00 0.00 0.00 0.00 00:12:04.128 00:12:05.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.061 Nvme0n1 : 7.00 13747.00 53.70 0.00 0.00 0.00 0.00 0.00 00:12:05.061 =================================================================================================================== 00:12:05.061 Total : 13747.00 53.70 0.00 0.00 0.00 0.00 0.00 00:12:05.061 00:12:05.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.993 Nvme0n1 : 8.00 13819.62 53.98 0.00 0.00 0.00 0.00 0.00 00:12:05.993 =================================================================================================================== 00:12:05.993 Total : 13819.62 53.98 0.00 0.00 0.00 0.00 0.00 00:12:05.993 00:12:06.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.933 Nvme0n1 : 9.00 13913.44 54.35 0.00 0.00 0.00 0.00 0.00 00:12:06.933 =================================================================================================================== 00:12:06.933 Total : 13913.44 54.35 0.00 0.00 0.00 0.00 0.00 00:12:06.933 00:12:07.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.868 Nvme0n1 : 10.00 13929.30 54.41 0.00 0.00 0.00 0.00 0.00 00:12:07.868 =================================================================================================================== 00:12:07.868 Total : 13929.30 54.41 0.00 0.00 0.00 0.00 0.00 00:12:07.868 00:12:07.868 00:12:07.868 Latency(us) 00:12:07.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.868 Nvme0n1 : 10.01 13928.83 54.41 0.00 0.00 9181.38 3932.16 15631.55 00:12:07.868 =================================================================================================================== 00:12:07.868 Total : 13928.83 54.41 0.00 0.00 9181.38 3932.16 15631.55 00:12:07.868 0 00:12:07.868 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2272457 00:12:07.868 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 2272457 ']' 00:12:07.868 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 2272457 00:12:07.868 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:12:07.868 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:07.869 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2272457 00:12:07.869 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:07.869 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:07.869 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2272457' 00:12:07.869 killing process with pid 2272457 00:12:07.869 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 2272457 00:12:07.869 Received shutdown signal, test time was about 10.000000 seconds 00:12:07.869 00:12:07.869 Latency(us) 00:12:07.869 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:12:07.869 =================================================================================================================== 00:12:07.869 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:07.869 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 2272457 00:12:08.127 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.693 02:28:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:08.952 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:12:08.952 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:09.211 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:09.211 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:09.211 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:09.212 [2024-05-15 02:28:56.612675] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:09.471 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:12:09.471 request: 00:12:09.471 { 00:12:09.471 "uuid": "6ea4d1d0-2823-4315-8f35-a53f95b9e11b", 00:12:09.471 "method": "bdev_lvol_get_lvstores", 00:12:09.471 "req_id": 1 00:12:09.471 } 00:12:09.471 Got JSON-RPC error response 00:12:09.471 response: 00:12:09.471 { 00:12:09.471 "code": -19, 00:12:09.471 "message": "No such device" 00:12:09.471 } 00:12:09.778 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:09.778 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:09.778 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:09.778 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:09.778 02:28:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:09.778 aio_bdev 00:12:09.778 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc 00:12:09.778 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc 00:12:09.778 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:09.778 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:12:09.778 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:09.778 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:09.778 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:10.036 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc -t 2000 00:12:10.294 [ 00:12:10.294 { 00:12:10.294 "name": "c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc", 00:12:10.294 "aliases": [ 00:12:10.294 "lvs/lvol" 00:12:10.294 ], 00:12:10.294 "product_name": "Logical Volume", 00:12:10.294 "block_size": 4096, 00:12:10.294 "num_blocks": 38912, 00:12:10.294 "uuid": "c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc", 00:12:10.294 "assigned_rate_limits": { 00:12:10.294 "rw_ios_per_sec": 0, 00:12:10.294 "rw_mbytes_per_sec": 0, 00:12:10.294 "r_mbytes_per_sec": 0, 00:12:10.294 "w_mbytes_per_sec": 0 00:12:10.294 }, 00:12:10.294 "claimed": false, 00:12:10.294 "zoned": false, 00:12:10.294 "supported_io_types": { 00:12:10.294 "read": true, 00:12:10.294 "write": true, 00:12:10.294 "unmap": true, 00:12:10.294 "write_zeroes": true, 00:12:10.294 "flush": false, 00:12:10.294 "reset": true, 00:12:10.294 "compare": false, 00:12:10.295 "compare_and_write": false, 00:12:10.295 "abort": false, 00:12:10.295 "nvme_admin": false, 00:12:10.295 "nvme_io": false 00:12:10.295 }, 00:12:10.295 "driver_specific": { 00:12:10.295 "lvol": { 00:12:10.295 "lvol_store_uuid": "6ea4d1d0-2823-4315-8f35-a53f95b9e11b", 00:12:10.295 "base_bdev": "aio_bdev", 
00:12:10.295 "thin_provision": false, 00:12:10.295 "num_allocated_clusters": 38, 00:12:10.295 "snapshot": false, 00:12:10.295 "clone": false, 00:12:10.295 "esnap_clone": false 00:12:10.295 } 00:12:10.295 } 00:12:10.295 } 00:12:10.295 ] 00:12:10.295 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:12:10.295 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:12:10.295 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:10.553 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:10.553 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:12:10.553 02:28:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:10.812 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:10.812 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c2981fe3-1dd4-48c7-8fdb-41991ef0e3bc 00:12:11.070 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6ea4d1d0-2823-4315-8f35-a53f95b9e11b 00:12:11.330 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.588 00:12:11.588 real 0m18.139s 00:12:11.588 user 0m12.496s 00:12:11.588 sys 0m3.941s 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:11.588 ************************************ 00:12:11.588 END TEST lvs_grow_clean 00:12:11.588 ************************************ 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.588 ************************************ 00:12:11.588 START TEST lvs_grow_dirty 00:12:11.588 ************************************ 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:12:11.588 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:11.589 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:11.589 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:11.589 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.589 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.589 02:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:12.156 02:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:12.156 02:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:12.156 02:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:12.156 02:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:12.156 02:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:12.414 02:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:12.414 02:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:12.415 02:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 lvol 150 00:12:12.673 02:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=167979e4-866a-442d-8252-2f38227e30da 00:12:12.673 02:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:12.673 02:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:12.931 [2024-05-15 02:29:00.292253] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:12.931 [2024-05-15 02:29:00.292349] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:12.931 true 00:12:12.931 02:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:12.931 02:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:12:13.190 02:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:13.190 02:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:13.448 02:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 167979e4-866a-442d-8252-2f38227e30da 00:12:13.706 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:13.965 [2024-05-15 02:29:01.295298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.965 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2276384 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2276384 /var/tmp/bdevperf.sock 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2276384 ']' 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:14.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:14.223 02:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:14.223 [2024-05-15 02:29:01.601251] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:12:14.223 [2024-05-15 02:29:01.601332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276384 ] 00:12:14.482 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.482 [2024-05-15 02:29:01.678045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.482 [2024-05-15 02:29:01.790601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.416 02:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:15.416 02:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:15.416 02:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:15.674 Nvme0n1 00:12:15.674 02:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:15.932 [ 00:12:15.932 { 00:12:15.932 "name": "Nvme0n1", 00:12:15.932 "aliases": [ 00:12:15.932 "167979e4-866a-442d-8252-2f38227e30da" 00:12:15.932 ], 00:12:15.932 "product_name": "NVMe disk", 00:12:15.932 "block_size": 4096, 00:12:15.932 "num_blocks": 38912, 00:12:15.932 "uuid": "167979e4-866a-442d-8252-2f38227e30da", 00:12:15.932 "assigned_rate_limits": { 00:12:15.932 "rw_ios_per_sec": 0, 00:12:15.932 "rw_mbytes_per_sec": 0, 00:12:15.932 "r_mbytes_per_sec": 0, 00:12:15.932 "w_mbytes_per_sec": 0 00:12:15.932 }, 00:12:15.932 "claimed": false, 00:12:15.932 "zoned": false, 00:12:15.932 "supported_io_types": { 00:12:15.932 "read": true, 00:12:15.932 "write": true, 00:12:15.932 "unmap": true, 00:12:15.932 "write_zeroes": true, 00:12:15.932 "flush": true, 00:12:15.932 "reset": true, 00:12:15.932 "compare": true, 00:12:15.932 "compare_and_write": true, 00:12:15.932 "abort": true, 00:12:15.932 "nvme_admin": true, 00:12:15.932 "nvme_io": true 00:12:15.932 }, 00:12:15.932 "memory_domains": [ 00:12:15.932 { 00:12:15.932 "dma_device_id": "system", 00:12:15.932 "dma_device_type": 1 00:12:15.932 } 00:12:15.932 ], 00:12:15.932 "driver_specific": { 00:12:15.932 "nvme": [ 00:12:15.932 { 00:12:15.932 "trid": { 00:12:15.932 "trtype": "TCP", 00:12:15.932 "adrfam": "IPv4", 00:12:15.932 "traddr": "10.0.0.2", 00:12:15.932 "trsvcid": "4420", 00:12:15.932 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:15.932 }, 00:12:15.932 "ctrlr_data": { 00:12:15.932 "cntlid": 1, 00:12:15.932 "vendor_id": "0x8086", 00:12:15.932 "model_number": "SPDK bdev Controller", 00:12:15.932 "serial_number": "SPDK0", 00:12:15.932 "firmware_revision": "24.05", 00:12:15.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:15.932 "oacs": { 00:12:15.932 "security": 0, 00:12:15.932 "format": 0, 00:12:15.932 "firmware": 0, 00:12:15.932 "ns_manage": 0 00:12:15.932 }, 00:12:15.932 "multi_ctrlr": true, 00:12:15.932 "ana_reporting": false 00:12:15.932 }, 00:12:15.932 "vs": { 00:12:15.932 "nvme_version": "1.3" 00:12:15.932 }, 00:12:15.932 "ns_data": { 00:12:15.932 "id": 1, 00:12:15.932 "can_share": true 00:12:15.932 } 00:12:15.932 } 00:12:15.932 ], 00:12:15.932 "mp_policy": "active_passive" 00:12:15.932 } 00:12:15.932 } 00:12:15.932 ] 00:12:15.932 02:29:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2276650 00:12:15.932 02:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:15.932 02:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:15.932 Running I/O for 10 seconds... 00:12:17.307 Latency(us) 00:12:17.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.307 Nvme0n1 : 1.00 13267.00 51.82 0.00 0.00 0.00 0.00 0.00 00:12:17.307 =================================================================================================================== 00:12:17.307 Total : 13267.00 51.82 0.00 0.00 0.00 0.00 0.00 00:12:17.307 00:12:17.872 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:18.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.130 Nvme0n1 : 2.00 13341.50 52.12 0.00 0.00 0.00 0.00 0.00 00:12:18.130 =================================================================================================================== 00:12:18.130 Total : 13341.50 52.12 0.00 0.00 0.00 0.00 0.00 00:12:18.130 00:12:18.130 true 00:12:18.130 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:18.130 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:18.388 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:18.388 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:18.388 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2276650 00:12:18.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.954 Nvme0n1 : 3.00 13438.33 52.49 0.00 0.00 0.00 0.00 0.00 00:12:18.954 =================================================================================================================== 00:12:18.954 Total : 13438.33 52.49 0.00 0.00 0.00 0.00 0.00 00:12:18.954 00:12:20.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.328 Nvme0n1 : 4.00 13522.75 52.82 0.00 0.00 0.00 0.00 0.00 00:12:20.328 =================================================================================================================== 00:12:20.328 Total : 13522.75 52.82 0.00 0.00 0.00 0.00 0.00 00:12:20.328 00:12:21.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.263 Nvme0n1 : 5.00 13552.60 52.94 0.00 0.00 0.00 0.00 0.00 00:12:21.263 =================================================================================================================== 00:12:21.263 Total : 13552.60 52.94 0.00 0.00 0.00 0.00 0.00 00:12:21.263 00:12:22.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.199 Nvme0n1 : 6.00 13607.17 53.15 0.00 0.00 0.00 0.00 0.00 00:12:22.199 
=================================================================================================================== 00:12:22.199 Total : 13607.17 53.15 0.00 0.00 0.00 0.00 0.00 00:12:22.199 00:12:23.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.132 Nvme0n1 : 7.00 13623.29 53.22 0.00 0.00 0.00 0.00 0.00 00:12:23.132 =================================================================================================================== 00:12:23.132 Total : 13623.29 53.22 0.00 0.00 0.00 0.00 0.00 00:12:23.132 00:12:24.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.130 Nvme0n1 : 8.00 13642.38 53.29 0.00 0.00 0.00 0.00 0.00 00:12:24.130 =================================================================================================================== 00:12:24.130 Total : 13642.38 53.29 0.00 0.00 0.00 0.00 0.00 00:12:24.130 00:12:25.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.068 Nvme0n1 : 9.00 13684.78 53.46 0.00 0.00 0.00 0.00 0.00 00:12:25.069 =================================================================================================================== 00:12:25.069 Total : 13684.78 53.46 0.00 0.00 0.00 0.00 0.00 00:12:25.069 00:12:26.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.000 Nvme0n1 : 10.00 13718.70 53.59 0.00 0.00 0.00 0.00 0.00 00:12:26.000 =================================================================================================================== 00:12:26.000 Total : 13718.70 53.59 0.00 0.00 0.00 0.00 0.00 00:12:26.000 00:12:26.000 00:12:26.000 Latency(us) 00:12:26.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.000 Nvme0n1 : 10.01 13719.02 53.59 0.00 0.00 9321.56 7378.87 18155.90 00:12:26.000 =================================================================================================================== 00:12:26.000 Total : 13719.02 53.59 0.00 0.00 9321.56 7378.87 18155.90 00:12:26.000 0 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2276384 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 2276384 ']' 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 2276384 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2276384 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2276384' 00:12:26.000 killing process with pid 2276384 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 2276384 00:12:26.000 Received shutdown signal, test time was about 10.000000 seconds 00:12:26.000 00:12:26.000 Latency(us) 00:12:26.000 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:12:26.000 =================================================================================================================== 00:12:26.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:26.000 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 2276384 00:12:26.257 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:26.820 02:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:27.077 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:27.077 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2272024 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2272024 00:12:27.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2272024 Killed "${NVMF_APP[@]}" "$@" 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2277977 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2277977 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2277977 ']' 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:27.335 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:27.335 [2024-05-15 02:29:14.582895] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:12:27.335 [2024-05-15 02:29:14.583022] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.335 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.335 [2024-05-15 02:29:14.658962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.593 [2024-05-15 02:29:14.767720] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.593 [2024-05-15 02:29:14.767784] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.593 [2024-05-15 02:29:14.767812] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.593 [2024-05-15 02:29:14.767823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.593 [2024-05-15 02:29:14.767833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.593 [2024-05-15 02:29:14.767857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.593 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.593 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:27.593 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.593 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.593 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:27.593 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.593 02:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:27.850 [2024-05-15 02:29:15.184098] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:27.850 [2024-05-15 02:29:15.184232] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:27.850 [2024-05-15 02:29:15.184292] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:27.850 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:27.850 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 167979e4-866a-442d-8252-2f38227e30da 00:12:27.850 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=167979e4-866a-442d-8252-2f38227e30da 00:12:27.850 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:27.850 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:27.850 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:27.850 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:27.850 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:28.108 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 167979e4-866a-442d-8252-2f38227e30da -t 2000 00:12:28.367 [ 00:12:28.367 { 00:12:28.367 "name": "167979e4-866a-442d-8252-2f38227e30da", 00:12:28.367 "aliases": [ 00:12:28.367 "lvs/lvol" 00:12:28.367 ], 00:12:28.367 "product_name": "Logical Volume", 00:12:28.367 "block_size": 4096, 00:12:28.367 "num_blocks": 38912, 00:12:28.367 "uuid": "167979e4-866a-442d-8252-2f38227e30da", 00:12:28.367 "assigned_rate_limits": { 00:12:28.367 "rw_ios_per_sec": 0, 00:12:28.367 "rw_mbytes_per_sec": 0, 00:12:28.367 "r_mbytes_per_sec": 0, 00:12:28.367 "w_mbytes_per_sec": 0 00:12:28.367 }, 00:12:28.367 "claimed": false, 00:12:28.367 "zoned": false, 00:12:28.367 "supported_io_types": { 00:12:28.367 "read": true, 00:12:28.367 "write": true, 00:12:28.367 "unmap": true, 00:12:28.367 "write_zeroes": true, 00:12:28.367 "flush": false, 00:12:28.367 "reset": true, 00:12:28.367 "compare": false, 00:12:28.367 "compare_and_write": false, 00:12:28.367 "abort": false, 00:12:28.367 "nvme_admin": false, 00:12:28.367 "nvme_io": false 00:12:28.367 }, 00:12:28.367 "driver_specific": { 00:12:28.367 "lvol": { 00:12:28.367 "lvol_store_uuid": "fbab7868-35c3-4542-afc6-f96ba5ff9fb6", 00:12:28.367 "base_bdev": "aio_bdev", 00:12:28.367 "thin_provision": false, 00:12:28.367 "num_allocated_clusters": 38, 00:12:28.367 "snapshot": false, 00:12:28.367 "clone": false, 00:12:28.367 "esnap_clone": false 00:12:28.367 } 00:12:28.367 } 00:12:28.367 } 00:12:28.367 ] 00:12:28.367 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:28.367 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:28.367 02:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:28.625 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:28.625 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:28.625 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:28.884 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:28.884 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:29.141 [2024-05-15 02:29:16.481170] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:29.141 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:29.400 request: 00:12:29.400 { 00:12:29.400 "uuid": "fbab7868-35c3-4542-afc6-f96ba5ff9fb6", 00:12:29.400 "method": "bdev_lvol_get_lvstores", 00:12:29.400 "req_id": 1 00:12:29.400 } 00:12:29.400 Got JSON-RPC error response 00:12:29.400 response: 00:12:29.400 { 00:12:29.400 "code": -19, 00:12:29.400 "message": "No such device" 00:12:29.400 } 00:12:29.400 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:29.400 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:29.400 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:29.400 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:29.400 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:29.658 aio_bdev 00:12:29.658 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 167979e4-866a-442d-8252-2f38227e30da 00:12:29.658 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=167979e4-866a-442d-8252-2f38227e30da 00:12:29.658 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:29.658 02:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:29.658 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:12:29.658 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:29.658 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:29.916 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 167979e4-866a-442d-8252-2f38227e30da -t 2000 00:12:30.174 [ 00:12:30.174 { 00:12:30.174 "name": "167979e4-866a-442d-8252-2f38227e30da", 00:12:30.174 "aliases": [ 00:12:30.174 "lvs/lvol" 00:12:30.174 ], 00:12:30.174 "product_name": "Logical Volume", 00:12:30.174 "block_size": 4096, 00:12:30.174 "num_blocks": 38912, 00:12:30.174 "uuid": "167979e4-866a-442d-8252-2f38227e30da", 00:12:30.174 "assigned_rate_limits": { 00:12:30.174 "rw_ios_per_sec": 0, 00:12:30.174 "rw_mbytes_per_sec": 0, 00:12:30.174 "r_mbytes_per_sec": 0, 00:12:30.174 "w_mbytes_per_sec": 0 00:12:30.174 }, 00:12:30.174 "claimed": false, 00:12:30.174 "zoned": false, 00:12:30.174 "supported_io_types": { 00:12:30.174 "read": true, 00:12:30.174 "write": true, 00:12:30.174 "unmap": true, 00:12:30.174 "write_zeroes": true, 00:12:30.174 "flush": false, 00:12:30.174 "reset": true, 00:12:30.174 "compare": false, 00:12:30.174 "compare_and_write": false, 00:12:30.174 "abort": false, 00:12:30.174 "nvme_admin": false, 00:12:30.174 "nvme_io": false 00:12:30.174 }, 00:12:30.174 "driver_specific": { 00:12:30.174 "lvol": { 00:12:30.174 "lvol_store_uuid": "fbab7868-35c3-4542-afc6-f96ba5ff9fb6", 00:12:30.174 "base_bdev": "aio_bdev", 00:12:30.174 "thin_provision": false, 00:12:30.174 "num_allocated_clusters": 38, 00:12:30.174 "snapshot": false, 00:12:30.174 "clone": false, 00:12:30.174 "esnap_clone": false 00:12:30.174 } 00:12:30.174 } 00:12:30.174 } 00:12:30.174 ] 00:12:30.174 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:30.174 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:30.174 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:30.432 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:30.432 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:30.432 02:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:30.689 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:30.689 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 167979e4-866a-442d-8252-2f38227e30da 00:12:30.947 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fbab7868-35c3-4542-afc6-f96ba5ff9fb6 00:12:31.206 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:31.464 00:12:31.464 real 0m19.816s 00:12:31.464 user 0m50.327s 00:12:31.464 sys 0m5.105s 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:31.464 ************************************ 00:12:31.464 END TEST lvs_grow_dirty 00:12:31.464 ************************************ 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:31.464 nvmf_trace.0 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.464 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.464 rmmod nvme_tcp 00:12:31.464 rmmod nvme_fabrics 00:12:31.722 rmmod nvme_keyring 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2277977 ']' 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2277977 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 2277977 ']' 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 2277977 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2277977 00:12:31.722 02:29:18 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2277977' 00:12:31.722 killing process with pid 2277977 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 2277977 00:12:31.722 02:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 2277977 00:12:31.981 02:29:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.981 02:29:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.981 02:29:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.981 02:29:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.981 02:29:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.981 02:29:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.981 02:29:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.981 02:29:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.883 02:29:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.883 00:12:33.883 real 0m43.660s 00:12:33.883 user 1m8.697s 00:12:33.883 sys 0m11.118s 00:12:33.883 02:29:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:33.883 02:29:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:33.883 ************************************ 00:12:33.883 END TEST nvmf_lvs_grow 00:12:33.883 ************************************ 00:12:33.883 02:29:21 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:33.883 02:29:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:33.883 02:29:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:33.883 02:29:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:34.142 ************************************ 00:12:34.142 START TEST nvmf_bdev_io_wait 00:12:34.142 ************************************ 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:34.142 * Looking for test storage... 
00:12:34.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.142 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:34.143 02:29:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.674 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:36.675 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:36.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:36.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:36.675 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.675 02:29:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:36.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:36.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:12:36.675 00:12:36.675 --- 10.0.0.2 ping statistics --- 00:12:36.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.675 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:12:36.675 00:12:36.675 --- 10.0.0.1 ping statistics --- 00:12:36.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.675 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2280799 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2280799 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 2280799 ']' 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:36.675 02:29:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:36.933 [2024-05-15 02:29:24.092325] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
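For reference, the nvmf_tcp_init sequence traced above builds a small two-namespace topology out of the two ice ports: the target-side port (cvl_0_0 on this runner) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace as 10.0.0.1, and TCP port 4420 is opened for NVMe/TCP. A minimal sketch of that setup, using the interface names and addresses from this run (they are runner-specific, not fixed values):

# target port gets its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator port stays in the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# accept NVMe/TCP traffic on port 4420 and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1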
00:12:36.933 [2024-05-15 02:29:24.092411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.933 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.933 [2024-05-15 02:29:24.173429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.933 [2024-05-15 02:29:24.292440] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.933 [2024-05-15 02:29:24.292510] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.933 [2024-05-15 02:29:24.292527] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.933 [2024-05-15 02:29:24.292540] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.933 [2024-05-15 02:29:24.292552] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.933 [2024-05-15 02:29:24.292638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.933 [2024-05-15 02:29:24.292698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.933 [2024-05-15 02:29:24.292796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.933 [2024-05-15 02:29:24.292799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.866 [2024-05-15 02:29:25.131157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.866 02:29:25 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.866 Malloc0 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.866 [2024-05-15 02:29:25.196464] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:37.866 [2024-05-15 02:29:25.196764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2280954 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2280956 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2280958 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:37.866 { 00:12:37.866 "params": { 00:12:37.866 
"name": "Nvme$subsystem", 00:12:37.866 "trtype": "$TEST_TRANSPORT", 00:12:37.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.866 "adrfam": "ipv4", 00:12:37.866 "trsvcid": "$NVMF_PORT", 00:12:37.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.866 "hdgst": ${hdgst:-false}, 00:12:37.866 "ddgst": ${ddgst:-false} 00:12:37.866 }, 00:12:37.866 "method": "bdev_nvme_attach_controller" 00:12:37.866 } 00:12:37.866 EOF 00:12:37.866 )") 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:37.866 { 00:12:37.866 "params": { 00:12:37.866 "name": "Nvme$subsystem", 00:12:37.866 "trtype": "$TEST_TRANSPORT", 00:12:37.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.866 "adrfam": "ipv4", 00:12:37.866 "trsvcid": "$NVMF_PORT", 00:12:37.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.866 "hdgst": ${hdgst:-false}, 00:12:37.866 "ddgst": ${ddgst:-false} 00:12:37.866 }, 00:12:37.866 "method": "bdev_nvme_attach_controller" 00:12:37.866 } 00:12:37.866 EOF 00:12:37.866 )") 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2280960 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:37.866 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:37.867 { 00:12:37.867 "params": { 00:12:37.867 "name": "Nvme$subsystem", 00:12:37.867 "trtype": "$TEST_TRANSPORT", 00:12:37.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.867 "adrfam": "ipv4", 00:12:37.867 "trsvcid": "$NVMF_PORT", 00:12:37.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.867 "hdgst": ${hdgst:-false}, 00:12:37.867 "ddgst": ${ddgst:-false} 00:12:37.867 }, 00:12:37.867 "method": "bdev_nvme_attach_controller" 00:12:37.867 } 00:12:37.867 EOF 00:12:37.867 )") 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:37.867 { 00:12:37.867 "params": { 00:12:37.867 "name": "Nvme$subsystem", 00:12:37.867 "trtype": "$TEST_TRANSPORT", 00:12:37.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.867 "adrfam": "ipv4", 00:12:37.867 "trsvcid": "$NVMF_PORT", 00:12:37.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.867 "hdgst": ${hdgst:-false}, 00:12:37.867 "ddgst": ${ddgst:-false} 00:12:37.867 }, 00:12:37.867 "method": "bdev_nvme_attach_controller" 00:12:37.867 } 00:12:37.867 EOF 00:12:37.867 )") 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2280954 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:37.867 "params": { 00:12:37.867 "name": "Nvme1", 00:12:37.867 "trtype": "tcp", 00:12:37.867 "traddr": "10.0.0.2", 00:12:37.867 "adrfam": "ipv4", 00:12:37.867 "trsvcid": "4420", 00:12:37.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.867 "hdgst": false, 00:12:37.867 "ddgst": false 00:12:37.867 }, 00:12:37.867 "method": "bdev_nvme_attach_controller" 00:12:37.867 }' 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:37.867 "params": { 00:12:37.867 "name": "Nvme1", 00:12:37.867 "trtype": "tcp", 00:12:37.867 "traddr": "10.0.0.2", 00:12:37.867 "adrfam": "ipv4", 00:12:37.867 "trsvcid": "4420", 00:12:37.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.867 "hdgst": false, 00:12:37.867 "ddgst": false 00:12:37.867 }, 00:12:37.867 "method": "bdev_nvme_attach_controller" 00:12:37.867 }' 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:37.867 "params": { 00:12:37.867 "name": "Nvme1", 00:12:37.867 "trtype": "tcp", 00:12:37.867 "traddr": "10.0.0.2", 00:12:37.867 "adrfam": "ipv4", 00:12:37.867 "trsvcid": "4420", 00:12:37.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.867 "hdgst": false, 00:12:37.867 "ddgst": false 00:12:37.867 }, 00:12:37.867 "method": "bdev_nvme_attach_controller" 00:12:37.867 }' 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:37.867 02:29:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:37.867 "params": { 00:12:37.867 "name": "Nvme1", 00:12:37.867 "trtype": "tcp", 00:12:37.867 "traddr": "10.0.0.2", 00:12:37.867 "adrfam": "ipv4", 00:12:37.867 "trsvcid": "4420", 
00:12:37.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.867 "hdgst": false, 00:12:37.867 "ddgst": false 00:12:37.867 }, 00:12:37.867 "method": "bdev_nvme_attach_controller" 00:12:37.867 }' 00:12:37.867 [2024-05-15 02:29:25.242916] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:12:37.867 [2024-05-15 02:29:25.242922] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:12:37.867 [2024-05-15 02:29:25.242916] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:12:37.867 [2024-05-15 02:29:25.243016] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 02:29:25.243016] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 02:29:25.243017] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:37.867 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:37.867 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:37.867 [2024-05-15 02:29:25.243662] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:12:37.867 [2024-05-15 02:29:25.243727] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:38.125 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.125 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.125 [2024-05-15 02:29:25.429213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.125 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.125 [2024-05-15 02:29:25.525214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:38.125 [2024-05-15 02:29:25.528120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.395 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.395 [2024-05-15 02:29:25.625942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.395 [2024-05-15 02:29:25.627148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:38.395 [2024-05-15 02:29:25.701155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.395 [2024-05-15 02:29:25.724608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:38.395 [2024-05-15 02:29:25.794690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:38.675 Running I/O for 1 seconds... 00:12:38.675 Running I/O for 1 seconds... 00:12:38.675 Running I/O for 1 seconds... 00:12:38.933 Running I/O for 1 seconds... 
00:12:39.866 00:12:39.866 Latency(us) 00:12:39.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.866 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:39.866 Nvme1n1 : 1.01 10622.82 41.50 0.00 0.00 11995.23 8495.41 20000.62 00:12:39.866 =================================================================================================================== 00:12:39.866 Total : 10622.82 41.50 0.00 0.00 11995.23 8495.41 20000.62 00:12:39.866 00:12:39.866 Latency(us) 00:12:39.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.866 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:39.866 Nvme1n1 : 1.00 191647.80 748.62 0.00 0.00 665.25 271.55 916.29 00:12:39.866 =================================================================================================================== 00:12:39.866 Total : 191647.80 748.62 0.00 0.00 665.25 271.55 916.29 00:12:39.866 00:12:39.866 Latency(us) 00:12:39.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.866 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:39.866 Nvme1n1 : 1.01 7187.77 28.08 0.00 0.00 17729.75 6140.97 25631.86 00:12:39.866 =================================================================================================================== 00:12:39.866 Total : 7187.77 28.08 0.00 0.00 17729.75 6140.97 25631.86 00:12:39.866 00:12:39.866 Latency(us) 00:12:39.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.866 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:39.866 Nvme1n1 : 1.01 9851.31 38.48 0.00 0.00 12953.97 4903.06 21359.88 00:12:39.866 =================================================================================================================== 00:12:39.867 Total : 9851.31 38.48 0.00 0.00 12953.97 4903.06 21359.88 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2280956 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2280958 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2280960 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:40.124 rmmod nvme_tcp 00:12:40.124 rmmod nvme_fabrics 00:12:40.124 rmmod nvme_keyring 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:40.124 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2280799 ']' 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2280799 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 2280799 ']' 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 2280799 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2280799 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2280799' 00:12:40.125 killing process with pid 2280799 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 2280799 00:12:40.125 [2024-05-15 02:29:27.521651] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:40.125 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 2280799 00:12:40.382 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:40.382 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:40.382 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:40.382 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:40.382 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:40.382 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.382 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.382 02:29:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.915 02:29:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:42.915 00:12:42.915 real 0m8.501s 00:12:42.915 user 0m20.689s 00:12:42.915 sys 0m3.890s 00:12:42.915 02:29:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:42.915 02:29:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.915 ************************************ 00:12:42.915 END TEST nvmf_bdev_io_wait 00:12:42.915 ************************************ 00:12:42.915 02:29:29 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:42.915 02:29:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:42.915 02:29:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:42.915 02:29:29 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:12:42.915 ************************************ 00:12:42.915 START TEST nvmf_queue_depth 00:12:42.915 ************************************ 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:42.915 * Looking for test storage... 00:12:42.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.915 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:42.916 02:29:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:45.449 
02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:45.449 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:45.449 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:45.449 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.449 02:29:32 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:45.450 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:45.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:12:45.450 00:12:45.450 --- 10.0.0.2 ping statistics --- 00:12:45.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.450 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:12:45.450 00:12:45.450 --- 10.0.0.1 ping statistics --- 00:12:45.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.450 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2283537 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2283537 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2283537 ']' 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:45.450 02:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.450 [2024-05-15 02:29:32.634395] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:12:45.450 [2024-05-15 02:29:32.634472] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.450 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.450 [2024-05-15 02:29:32.715263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.450 [2024-05-15 02:29:32.830197] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.450 [2024-05-15 02:29:32.830272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.450 [2024-05-15 02:29:32.830289] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.450 [2024-05-15 02:29:32.830303] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.450 [2024-05-15 02:29:32.830315] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.450 [2024-05-15 02:29:32.830356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.386 [2024-05-15 02:29:33.586818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.386 Malloc0 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.386 02:29:33 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.386 [2024-05-15 02:29:33.647567] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:46.386 [2024-05-15 02:29:33.647848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2283631 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2283631 /var/tmp/bdevperf.sock 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2283631 ']' 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:46.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:46.386 02:29:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.386 [2024-05-15 02:29:33.692798] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
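The rpc_cmd calls above build the queue-depth target: a TCP transport, a RAM-backed Malloc bdev, a subsystem, a namespace, and a listener. As a hedged recap, the same five calls expressed as direct scripts/rpc.py invocations (rpc_cmd is the harness wrapper around rpc.py and the default /var/tmp/spdk.sock socket; the workspace path is the one used in this run, and the transport flags are passed exactly as the harness does):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192               # TCP transport, options as set by the harness
  $RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then launched as the initiator with -z (start idle and wait for RPC), given the controller via bdev_nvme_attach_controller on /var/tmp/bdevperf.sock, and kicked off with bdevperf.py perform_tests, all of which appears in the trace that follows. For reference, the result reported further down, 8548.45 IOPS at a 4096-byte IO size, works out to 8548.45 * 4096 / 2^20 ≈ 33.4 MiB/s, matching the MiB/s column of the summary table.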
00:12:46.386 [2024-05-15 02:29:33.692874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283631 ] 00:12:46.386 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.386 [2024-05-15 02:29:33.769527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.644 [2024-05-15 02:29:33.886324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.576 02:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:47.576 02:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:47.576 02:29:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:47.576 02:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.576 02:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:47.576 NVMe0n1 00:12:47.576 02:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.576 02:29:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:47.576 Running I/O for 10 seconds... 00:12:59.775 00:12:59.775 Latency(us) 00:12:59.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.775 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:59.775 Verification LBA range: start 0x0 length 0x4000 00:12:59.775 NVMe0n1 : 10.07 8548.45 33.39 0.00 0.00 119288.67 10194.49 83109.36 00:12:59.775 =================================================================================================================== 00:12:59.775 Total : 8548.45 33.39 0.00 0.00 119288.67 10194.49 83109.36 00:12:59.775 0 00:12:59.775 02:29:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2283631 00:12:59.775 02:29:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 2283631 ']' 00:12:59.775 02:29:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2283631 00:12:59.775 02:29:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:12:59.775 02:29:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:59.775 02:29:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2283631 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2283631' 00:12:59.775 killing process with pid 2283631 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2283631 00:12:59.775 Received shutdown signal, test time was about 10.000000 seconds 00:12:59.775 00:12:59.775 Latency(us) 00:12:59.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.775 =================================================================================================================== 00:12:59.775 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2283631 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.775 rmmod nvme_tcp 00:12:59.775 rmmod nvme_fabrics 00:12:59.775 rmmod nvme_keyring 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2283537 ']' 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2283537 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 2283537 ']' 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2283537 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2283537 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2283537' 00:12:59.775 killing process with pid 2283537 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2283537 00:12:59.775 [2024-05-15 02:29:45.389488] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2283537 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.775 02:29:45 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.341 02:29:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.600 00:13:00.600 real 0m17.896s 00:13:00.600 user 0m25.124s 00:13:00.600 sys 0m3.504s 00:13:00.600 02:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.600 02:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.600 ************************************ 00:13:00.600 END TEST nvmf_queue_depth 00:13:00.600 ************************************ 00:13:00.600 02:29:47 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:00.600 02:29:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:00.600 02:29:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.600 02:29:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.600 ************************************ 00:13:00.600 START TEST nvmf_target_multipath 00:13:00.600 ************************************ 00:13:00.600 02:29:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:00.600 * Looking for test storage... 00:13:00.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.600 02:29:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.600 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.601 02:29:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:03.131 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.131 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.131 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.131 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.131 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.131 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.131 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.131 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:03.132 02:29:50 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:03.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:03.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.132 02:29:50 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:03.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:03.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.132 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:13:03.132 00:13:03.132 --- 10.0.0.2 ping statistics --- 00:13:03.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.133 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:13:03.133 00:13:03.133 --- 10.0.0.1 ping statistics --- 00:13:03.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.133 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.133 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:03.391 only one NIC for nvmf test 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.391 rmmod nvme_tcp 00:13:03.391 rmmod nvme_fabrics 00:13:03.391 rmmod nvme_keyring 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.391 02:29:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:05.334 00:13:05.334 real 0m4.851s 00:13:05.334 user 0m1.048s 00:13:05.334 sys 0m1.822s 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:05.334 02:29:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:05.334 ************************************ 00:13:05.334 END TEST nvmf_target_multipath 00:13:05.334 ************************************ 00:13:05.334 02:29:52 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:05.334 02:29:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:05.334 02:29:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:05.334 02:29:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:05.334 ************************************ 00:13:05.334 START TEST nvmf_zcopy 00:13:05.334 ************************************ 00:13:05.334 02:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:05.593 * Looking for test storage... 
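Both the queue-depth and multipath runs end with the same nvmftestfini teardown traced above. A condensed sketch of what it performs (module and interface names as in this run; the namespace removal happens inside _remove_spdk_ns, whose body is not shown in this excerpt, so the ip netns delete step is an assumption):

  kill "$nvmfpid"                       # killprocess: stop the nvmf_tgt started by nvmfappstart (skipped when none is running)
  sync
  modprobe -v -r nvme-tcp               # unloads nvme_tcp and, per the rmmod lines above, nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns                       # assumed effect: ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1              # clear the initiator-side address

The multipath test itself configures nothing here: finding no second NIC set up for the test, it prints 'only one NIC for nvmf test', tears down, and exits 0.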
00:13:05.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:05.593 02:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:08.120 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.120 
02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:08.120 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.120 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:08.121 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:08.121 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:13:08.121 00:13:08.121 --- 10.0.0.2 ping statistics --- 00:13:08.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.121 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:13:08.121 00:13:08.121 --- 10.0.0.1 ping statistics --- 00:13:08.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.121 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2289631 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2289631 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 2289631 ']' 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:08.121 02:29:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 [2024-05-15 02:29:55.446431] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:13:08.121 [2024-05-15 02:29:55.446516] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.121 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.121 [2024-05-15 02:29:55.526916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.379 [2024-05-15 02:29:55.641958] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.379 [2024-05-15 02:29:55.642033] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:08.379 [2024-05-15 02:29:55.642049] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.379 [2024-05-15 02:29:55.642063] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.379 [2024-05-15 02:29:55.642075] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.379 [2024-05-15 02:29:55.642104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.311 [2024-05-15 02:29:56.438759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.311 [2024-05-15 02:29:56.454696] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:09.311 [2024-05-15 02:29:56.454994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.311 malloc0 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:09.311 { 00:13:09.311 "params": { 00:13:09.311 "name": "Nvme$subsystem", 00:13:09.311 "trtype": "$TEST_TRANSPORT", 00:13:09.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:09.311 "adrfam": "ipv4", 00:13:09.311 "trsvcid": "$NVMF_PORT", 00:13:09.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:09.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:09.311 "hdgst": ${hdgst:-false}, 00:13:09.311 "ddgst": ${ddgst:-false} 00:13:09.311 }, 00:13:09.311 "method": "bdev_nvme_attach_controller" 00:13:09.311 } 00:13:09.311 EOF 00:13:09.311 )") 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:09.311 02:29:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:09.311 "params": { 00:13:09.311 "name": "Nvme1", 00:13:09.311 "trtype": "tcp", 00:13:09.311 "traddr": "10.0.0.2", 00:13:09.311 "adrfam": "ipv4", 00:13:09.311 "trsvcid": "4420", 00:13:09.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:09.311 "hdgst": false, 00:13:09.311 "ddgst": false 00:13:09.311 }, 00:13:09.311 "method": "bdev_nvme_attach_controller" 00:13:09.311 }' 00:13:09.311 [2024-05-15 02:29:56.527401] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:13:09.311 [2024-05-15 02:29:56.527493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289783 ] 00:13:09.311 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.311 [2024-05-15 02:29:56.599792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.311 [2024-05-15 02:29:56.715483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.568 Running I/O for 10 seconds... 
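Lines zcopy.sh@13-30 above provision the target entirely over JSON-RPC: nvmf_tgt is started inside the namespace on core mask 0x2, a TCP transport is created with zero-copy enabled (-c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 gets a data listener and a discovery listener on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4 KiB blocks is attached as namespace 1. A sketch of the same sequence issued with scripts/rpc.py, flags copied from the trace (rpc_cmd in the test is a thin wrapper around this; paths are relative to an SPDK checkout):

  # start the target inside the namespace (flags as seen in the log),
  # then wait for /var/tmp/spdk.sock before issuing RPCs (waitforlisten in the test)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # zero-copy TCP transport, one subsystem, one listener, one malloc-backed namespace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0        # 32 MiB bdev, 4 KiB blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
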
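The bdevperf invocation at zcopy.sh@33 never writes a config file: gen_nvmf_target_json prints a bdev-subsystem JSON containing a bdev_nvme_attach_controller entry (its resolved parameters are echoed above) and hands it to bdevperf through process substitution, which is why the command line shows --json /dev/fd/62. A sketch of an equivalent standalone config, reproducing only the attach-controller entry visible in the trace (the generated document may carry additional bdev-subsystem entries not shown here):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

Saved as, say, /tmp/bdevperf_nvme.json (a hypothetical path), the 10-second verify pass from the trace becomes:

  # queue depth 128, 8 KiB I/O, verify workload for 10 seconds
  ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192
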
00:13:21.758
00:13:21.758                                        Latency(us)
00:13:21.758 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min       max
00:13:21.758 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:21.758 Verification LBA range: start 0x0 length 0x1000
00:13:21.758 Nvme1n1                     :      10.05  6021.92    47.05     0.00   0.00   21115.04  1517.04  41943.04
00:13:21.758 ===================================================================================================================
00:13:21.758 Total                       :             6021.92    47.05     0.00   0.00   21115.04  1517.04  41943.04
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2291086
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:21.758 {
00:13:21.758   "params": {
00:13:21.758     "name": "Nvme$subsystem",
00:13:21.758     "trtype": "$TEST_TRANSPORT",
00:13:21.758     "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:21.758     "adrfam": "ipv4",
00:13:21.758     "trsvcid": "$NVMF_PORT",
00:13:21.758     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:21.758     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:21.758     "hdgst": ${hdgst:-false},
00:13:21.758     "ddgst": ${ddgst:-false}
00:13:21.758   },
00:13:21.758   "method": "bdev_nvme_attach_controller"
00:13:21.758 }
00:13:21.758 EOF
00:13:21.758 )")
00:13:21.758 [2024-05-15 02:30:07.304393] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:21.758 [2024-05-15 02:30:07.304441] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
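The table above closes the first pass: the verify workload sustained about 6,022 IOPS (roughly 47 MiB/s) at queue depth 128 with an average latency of about 21.1 ms over the 10.05 s run. zcopy.sh@37-41 then start a second bdevperf instance, 5 seconds of 50/50 random read/write at the same queue depth and I/O size, and from this point the target log repeats "Requested NSID 1 already in use" / "Unable to add namespace" every few milliseconds. That pattern is consistent with the test re-issuing nvmf_subsystem_add_ns for the namespace that is already attached while I/O is in flight, so each attempt is rejected and logged; the run keeps going, so the rejections look intentional rather than a failure. A hypothetical loop that would produce this kind of output (an illustration, not the actual zcopy.sh code; /tmp/bdevperf_nvme.json is the hypothetical config from the earlier sketch):

  # second pass: 5 s of 50/50 randrw while repeatedly poking the namespace RPC
  ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!

  # every add_ns call for an NSID that already exists fails with
  # "Requested NSID 1 already in use", matching the target log above
  while kill -0 "$perfpid" 2> /dev/null; do
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
  wait "$perfpid"
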
00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:21.758 02:30:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:21.758 "params": { 00:13:21.758 "name": "Nvme1", 00:13:21.758 "trtype": "tcp", 00:13:21.758 "traddr": "10.0.0.2", 00:13:21.758 "adrfam": "ipv4", 00:13:21.758 "trsvcid": "4420", 00:13:21.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:21.758 "hdgst": false, 00:13:21.758 "ddgst": false 00:13:21.758 }, 00:13:21.758 "method": "bdev_nvme_attach_controller" 00:13:21.758 }' 00:13:21.758 [2024-05-15 02:30:07.312322] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.312350] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.320336] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.320361] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.328358] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.328383] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.336379] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.336403] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.344400] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.344424] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.345035] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:13:21.759 [2024-05-15 02:30:07.345112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291086 ] 00:13:21.759 [2024-05-15 02:30:07.352420] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.352445] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.360442] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.360475] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.368463] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.368488] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.376485] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.376511] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.759 [2024-05-15 02:30:07.384509] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.384533] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.392530] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.392562] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.400551] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.400575] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.408573] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.408597] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.416597] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.416621] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.421503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.759 [2024-05-15 02:30:07.424621] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.424646] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.432678] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.432718] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.440669] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.440696] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.448687] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.448712] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.456707] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.456732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.464728] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.464752] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.472750] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.472774] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.480772] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.480797] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.488825] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.488860] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.496849] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.496887] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.504845] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.504882] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.512859] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.512884] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.520883] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.520908] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.528906] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.528939] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.536928] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.536983] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.543419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.759 [2024-05-15 02:30:07.544957] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.544995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.552991] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.553012] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.561043] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.561075] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.569081] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.569120] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.577081] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.577118] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.585094] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.585132] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.593119] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.593156] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.601134] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.601171] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.609130] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.609156] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.617150] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.617176] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.625217] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.625254] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.633243] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.633295] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.641226] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.641251] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.649246] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.649282] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.657293] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.657321] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.665320] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.665349] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.673345] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.673372] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.681367] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:13:21.759 [2024-05-15 02:30:07.681395] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.689386] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.689414] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.697404] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.697432] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.705426] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.705453] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.713444] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.713470] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 [2024-05-15 02:30:07.721475] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.721505] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.759 Running I/O for 5 seconds... 00:13:21.759 [2024-05-15 02:30:07.729493] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.759 [2024-05-15 02:30:07.729519] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.740573] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.740605] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.750665] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.750698] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.763355] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.763387] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.774620] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.774653] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.786842] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.786873] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.798395] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.798426] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.809238] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.809281] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.821079] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.821108] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:13:21.760 [2024-05-15 02:30:07.832060] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.832088] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.843997] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.844025] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.854663] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.854699] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.866711] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.866743] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.877431] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.877461] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.887009] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.887036] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.898536] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.898565] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.908861] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.908889] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.920054] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.920082] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.930964] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.931008] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.940863] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.940891] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.951843] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.951870] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.964208] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.964236] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.973589] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.973618] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.984361] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:21.760 [2024-05-15 02:30:07.984389] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:07.994876] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:07.994904] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.004967] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.004995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.015646] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.015674] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.025809] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.025835] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.036682] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.036709] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.046943] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.046970] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.057336] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.057379] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.069962] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.070000] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.079252] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.079287] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.089707] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.089735] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.099798] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.099825] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.110400] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.110427] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.120482] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.120509] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.131711] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.131739] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.141863] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.141890] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.152363] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.152390] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.162358] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.162386] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.173697] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.173724] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.183369] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.183397] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.194379] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.194406] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.206625] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.206652] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.215631] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.215659] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.226608] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.226636] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.236879] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.236906] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.247303] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.247331] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.258215] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.258244] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.269159] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.269187] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.279793] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.279829] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.290542] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.290570] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.300747] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.300775] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.311461] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.760 [2024-05-15 02:30:08.311489] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.760 [2024-05-15 02:30:08.321684] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.321712] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.332557] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.332586] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.347434] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.347463] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.356704] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.356732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.367599] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.367626] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.378000] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.378028] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.387726] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.387754] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.398466] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.398493] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.408573] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.408600] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.418726] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.418753] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.429363] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.429391] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.442131] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.442158] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.451460] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.451487] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.462421] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.462449] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.472623] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.472651] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.483618] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.483652] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.494214] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.494242] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.506753] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.506781] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.516339] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.516367] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.526851] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.526878] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.537340] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.537367] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.547647] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.547673] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.558114] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.558142] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.570654] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.570682] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.579663] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.579690] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.590836] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.590863] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.601313] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.601340] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.611939] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.611966] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.622207] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.622234] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.632169] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.632196] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.643155] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.643182] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.652648] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.652675] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.664007] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.664034] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.673679] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.673705] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.685277] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.685312] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.695233] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.695260] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.705942] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.705968] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.716644] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.716671] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.726791] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.726818] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.736826] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.736868] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.747695] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.747722] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.759739] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.759766] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.768379] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.768405] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.781324] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.781351] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.790903] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.790950] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.801786] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.801814] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.811751] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.811778] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.822584] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.822611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.833020] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.833049] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.843482] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.843510] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.853991] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.854020] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.864697] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.864726] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.875460] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.875489] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.885273] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.885318] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.761 [2024-05-15 02:30:08.896417] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.761 [2024-05-15 02:30:08.896445] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.762 [2024-05-15 02:30:08.906467] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.762 [2024-05-15 02:30:08.906495] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:21.762 [2024-05-15 02:30:08.917194] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:21.762 [2024-05-15 02:30:08.917221] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:1997 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1531 "Unable to add namespace") repeats roughly every 10 ms, several hundred times, from 02:30:08.927 through 02:30:12.123 (elapsed 00:13:21.762 to 00:13:24.906) ...]
00:13:24.906 [2024-05-15 02:30:12.134371] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:24.906 [2024-05-15 02:30:12.134399] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:24.906 [2024-05-15 02:30:12.144858] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.144886] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.155506] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.155534] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.168170] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.168197] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.177609] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.177637] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.188570] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.188624] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.198365] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.198392] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.209433] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.209460] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.219964] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.219998] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.229967] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.229995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.240399] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.240428] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.250710] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.250737] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.261136] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.261164] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.271583] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.271611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.282274] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.282301] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.292514] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.292542] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.303054] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.303081] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.906 [2024-05-15 02:30:12.313249] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.906 [2024-05-15 02:30:12.313276] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.324083] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.324110] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.334066] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.334093] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.345523] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.345551] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.356288] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.356315] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.365735] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.365762] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.377058] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.377085] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.387646] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.387673] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.397923] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.397957] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.408495] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.408523] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.418720] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.418748] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.428777] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.428806] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.440422] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.440450] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.451202] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.451229] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.461938] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.461966] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.472847] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.472874] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.482539] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.482566] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.494019] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.494046] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.503988] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.504016] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.514804] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.514832] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.526652] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.526680] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.164 [2024-05-15 02:30:12.536228] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.164 [2024-05-15 02:30:12.536256] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.165 [2024-05-15 02:30:12.547123] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.165 [2024-05-15 02:30:12.547150] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.165 [2024-05-15 02:30:12.557339] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.165 [2024-05-15 02:30:12.557366] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.165 [2024-05-15 02:30:12.567818] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.165 [2024-05-15 02:30:12.567846] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.165 [2024-05-15 02:30:12.578623] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.165 [2024-05-15 02:30:12.578650] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.589175] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.589202] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.599630] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.599657] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.610653] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.610680] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.620937] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.620964] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.631304] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.631331] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.642246] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.642273] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.653091] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.653118] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.663180] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.663206] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.674076] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.674103] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.686636] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.686663] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.695927] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.695963] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.707324] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.707352] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.718045] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.718072] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.727622] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.727649] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.738500] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.738528] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.748708] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.748736] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 00:13:25.423 Latency(us) 00:13:25.423 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:13:25.423 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:25.423 Nvme1n1 : 5.01 12089.93 94.45 0.00 0.00 10570.93 3762.25 21068.61 00:13:25.423 =================================================================================================================== 00:13:25.423 Total : 12089.93 94.45 0.00 0.00 10570.93 3762.25 21068.61 00:13:25.423 [2024-05-15 02:30:12.754021] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.754047] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.761892] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.761921] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.769908] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.769948] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.777981] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.778020] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.786029] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.786076] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.794031] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.794080] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.802070] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.802119] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.810083] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.810132] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.818097] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.818146] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.826125] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.826184] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.423 [2024-05-15 02:30:12.834151] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.423 [2024-05-15 02:30:12.834200] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.681 [2024-05-15 02:30:12.842172] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.842221] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.850188] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.850234] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.858210] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.858260] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.866228] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.866284] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.874252] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.874300] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.882278] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.882327] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.890258] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.890303] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.898259] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.898297] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.906295] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.906321] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.914314] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.914338] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.922327] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.922349] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.930419] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.930469] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.938438] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.938487] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.946441] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.946480] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.954424] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.954449] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.962447] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.962472] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.970466] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:13:25.682 [2024-05-15 02:30:12.970491] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.978476] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.978507] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.986577] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.986628] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:12.994598] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:12.994645] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:13.002558] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:13.002583] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:13.010580] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:13.010605] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 [2024-05-15 02:30:13.018599] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:25.682 [2024-05-15 02:30:13.018623] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:25.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2291086) - No such process 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2291086 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:25.682 delay0 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.682 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:25.682 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.682 [2024-05-15 02:30:13.093114] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:13:32.236 Initializing NVMe Controllers 00:13:32.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:32.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:32.236 Initialization complete. Launching workers. 00:13:32.236 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 166 00:13:32.236 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 453, failed to submit 33 00:13:32.236 success 299, unsuccess 154, failed 0 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.236 rmmod nvme_tcp 00:13:32.236 rmmod nvme_fabrics 00:13:32.236 rmmod nvme_keyring 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2289631 ']' 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2289631 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 2289631 ']' 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 2289631 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2289631 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:32.236 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2289631' 00:13:32.237 killing process with pid 2289631 00:13:32.237 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 2289631 00:13:32.237 [2024-05-15 02:30:19.449998] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:32.237 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 2289631 00:13:32.496 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:32.496 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:32.496 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:32.496 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.496 02:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.496 02:30:19 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.496 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.496 02:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.398 02:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:34.398 00:13:34.398 real 0m29.076s 00:13:34.398 user 0m41.614s 00:13:34.398 sys 0m9.043s 00:13:34.398 02:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:34.398 02:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:34.398 ************************************ 00:13:34.398 END TEST nvmf_zcopy 00:13:34.398 ************************************ 00:13:34.656 02:30:21 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:34.656 02:30:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:34.656 02:30:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:34.656 02:30:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:34.656 ************************************ 00:13:34.656 START TEST nvmf_nmic 00:13:34.656 ************************************ 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:34.656 * Looking for test storage... 00:13:34.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
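A condensed sketch of how the argument plumbing traced here fits together: build_nvmf_app_args (just above) appends the shared-memory id and event mask to NVMF_APP, nvmf_tcp_init (below) prepends the network-namespace wrapper, and nvmfappstart finally adds the caller's core mask. This is a simplified stand-in using this run's values, not the exact common.sh helpers:

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)   # assumed starting point for the sketch
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                                       # from the build_nvmf_app_args trace above
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)                                # set by nvmf_tcp_init further down
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -m 0xF &                                                         # roughly what nvmfappstart -m 0xF ends up launching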
00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.656 02:30:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.657 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:34.657 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:34.657 02:30:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:34.657 02:30:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:37.278 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:37.278 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.278 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:37.279 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:37.279 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:37.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:37.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:13:37.279 00:13:37.279 --- 10.0.0.2 ping statistics --- 00:13:37.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.279 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:13:37.279 00:13:37.279 --- 10.0.0.1 ping statistics --- 00:13:37.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.279 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2295263 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2295263 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 2295263 ']' 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:37.279 02:30:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.279 [2024-05-15 02:30:24.499824] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
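Taken together, the nvmf_tcp_init trace above reduces to a short ip/iptables sequence, and the rpc_cmd calls in the nmic.sh trace that follows configure the freshly started target over /var/tmp/spdk.sock. A condensed sketch of both, reusing the interface names and addresses discovered above (a simplified stand-in for the common.sh helpers, not the exact code):

  # target-side port goes into its own namespace; the initiator side stays in the host namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator-side interface

  # once nvmf_tgt answers on its RPC socket, the same configuration that nmic.sh drives via rpc_cmd
  # can be issued with scripts/rpc.py (rpc_get_methods used here as a simple readiness probe):
  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
  until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420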
00:13:37.279 [2024-05-15 02:30:24.499921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.279 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.279 [2024-05-15 02:30:24.577994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.279 [2024-05-15 02:30:24.692250] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.279 [2024-05-15 02:30:24.692302] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.279 [2024-05-15 02:30:24.692317] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.279 [2024-05-15 02:30:24.692329] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.279 [2024-05-15 02:30:24.692340] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.279 [2024-05-15 02:30:24.692398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.537 [2024-05-15 02:30:24.692441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.537 [2024-05-15 02:30:24.692488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.537 [2024-05-15 02:30:24.692491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.101 [2024-05-15 02:30:25.475851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.101 Malloc0 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.101 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.357 [2024-05-15 02:30:25.527018] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:38.357 [2024-05-15 02:30:25.527333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:38.357 test case1: single bdev can't be used in multiple subsystems 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.357 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.357 [2024-05-15 02:30:25.551133] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:38.357 [2024-05-15 02:30:25.551163] subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:38.357 [2024-05-15 02:30:25.551179] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.357 request: 00:13:38.357 { 00:13:38.357 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:38.357 "namespace": { 00:13:38.357 "bdev_name": "Malloc0", 00:13:38.358 "no_auto_visible": false 00:13:38.358 }, 00:13:38.358 "method": "nvmf_subsystem_add_ns", 00:13:38.358 "req_id": 1 00:13:38.358 } 00:13:38.358 Got JSON-RPC error response 00:13:38.358 response: 00:13:38.358 { 00:13:38.358 "code": -32602, 00:13:38.358 "message": "Invalid parameters" 00:13:38.358 } 00:13:38.358 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:38.358 02:30:25 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:38.358 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:38.358 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:38.358 Adding namespace failed - expected result. 00:13:38.358 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:38.358 test case2: host connect to nvmf target in multiple paths 00:13:38.358 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:38.358 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.358 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:38.358 [2024-05-15 02:30:25.559252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:38.358 02:30:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.358 02:30:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.922 02:30:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:39.487 02:30:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.487 02:30:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:13:39.487 02:30:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.487 02:30:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:39.487 02:30:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:13:41.384 02:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:41.384 02:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:41.384 02:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.384 02:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:41.384 02:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.384 02:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:13:41.384 02:30:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:41.384 [global] 00:13:41.384 thread=1 00:13:41.384 invalidate=1 00:13:41.384 rw=write 00:13:41.384 time_based=1 00:13:41.384 runtime=1 00:13:41.384 ioengine=libaio 00:13:41.384 direct=1 00:13:41.384 bs=4096 00:13:41.384 iodepth=1 00:13:41.384 norandommap=0 00:13:41.384 numjobs=1 00:13:41.384 00:13:41.641 verify_dump=1 00:13:41.641 verify_backlog=512 00:13:41.641 verify_state_save=0 00:13:41.641 do_verify=1 00:13:41.641 verify=crc32c-intel 00:13:41.641 [job0] 00:13:41.641 filename=/dev/nvme0n1 00:13:41.641 Could not set queue depth (nvme0n1) 00:13:41.641 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:13:41.641 fio-3.35 00:13:41.641 Starting 1 thread 00:13:43.014 00:13:43.014 job0: (groupid=0, jobs=1): err= 0: pid=2295906: Wed May 15 02:30:30 2024 00:13:43.014 read: IOPS=23, BW=92.2KiB/s (94.4kB/s)(96.0KiB/1041msec) 00:13:43.014 slat (nsec): min=14780, max=44254, avg=18666.21, stdev=7119.56 00:13:43.014 clat (usec): min=541, max=42441, avg=37850.62, stdev=11476.00 00:13:43.015 lat (usec): min=557, max=42456, avg=37869.29, stdev=11472.47 00:13:43.015 clat percentiles (usec): 00:13:43.015 | 1.00th=[ 545], 5.00th=[ 701], 10.00th=[41157], 20.00th=[41157], 00:13:43.015 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:43.015 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:43.015 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:43.015 | 99.99th=[42206] 00:13:43.015 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:13:43.015 slat (nsec): min=7222, max=69470, avg=18447.75, stdev=7820.40 00:13:43.015 clat (usec): min=211, max=375, avg=234.77, stdev=20.51 00:13:43.015 lat (usec): min=226, max=444, avg=253.22, stdev=25.89 00:13:43.015 clat percentiles (usec): 00:13:43.015 | 1.00th=[ 217], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 223], 00:13:43.015 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 231], 00:13:43.015 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 265], 95.00th=[ 289], 00:13:43.015 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 375], 99.95th=[ 375], 00:13:43.015 | 99.99th=[ 375] 00:13:43.015 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:43.015 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:43.015 lat (usec) : 250=85.07%, 500=10.45%, 750=0.37% 00:13:43.015 lat (msec) : 50=4.10% 00:13:43.015 cpu : usr=0.48%, sys=0.96%, ctx=536, majf=0, minf=2 00:13:43.015 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.015 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.015 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.015 00:13:43.015 Run status group 0 (all jobs): 00:13:43.015 READ: bw=92.2KiB/s (94.4kB/s), 92.2KiB/s-92.2KiB/s (94.4kB/s-94.4kB/s), io=96.0KiB (98.3kB), run=1041-1041msec 00:13:43.015 WRITE: bw=1967KiB/s (2015kB/s), 1967KiB/s-1967KiB/s (2015kB/s-2015kB/s), io=2048KiB (2097kB), run=1041-1041msec 00:13:43.015 00:13:43.015 Disk stats (read/write): 00:13:43.015 nvme0n1: ios=68/512, merge=0/0, ticks=792/122, in_queue=914, util=92.79% 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.015 02:30:30 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.015 rmmod nvme_tcp 00:13:43.015 rmmod nvme_fabrics 00:13:43.015 rmmod nvme_keyring 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2295263 ']' 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2295263 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 2295263 ']' 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 2295263 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2295263 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2295263' 00:13:43.015 killing process with pid 2295263 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 2295263 00:13:43.015 [2024-05-15 02:30:30.381158] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:43.015 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 2295263 00:13:43.273 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.273 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.273 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:43.273 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.273 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.273 02:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.273 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.274 02:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.806 02:30:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.806 00:13:45.806 real 0m10.882s 00:13:45.806 user 0m24.851s 00:13:45.806 sys 0m2.658s 00:13:45.806 02:30:32 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:13:45.806 02:30:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:45.806 ************************************ 00:13:45.806 END TEST nvmf_nmic 00:13:45.806 ************************************ 00:13:45.806 02:30:32 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:45.807 02:30:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:45.807 02:30:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:45.807 02:30:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:45.807 ************************************ 00:13:45.807 START TEST nvmf_fio_target 00:13:45.807 ************************************ 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:45.807 * Looking for test storage... 00:13:45.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.807 02:30:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.337 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:48.338 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:48.338 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.338 
02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:48.338 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:48.338 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:48.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:13:48.338 00:13:48.338 --- 10.0.0.2 ping statistics --- 00:13:48.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.338 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:13:48.338 00:13:48.338 --- 10.0.0.1 ping statistics --- 00:13:48.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.338 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2298390 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2298390 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 2298390 ']' 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:48.338 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.338 [2024-05-15 02:30:35.535835] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:13:48.338 [2024-05-15 02:30:35.535919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.338 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.338 [2024-05-15 02:30:35.610995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.338 [2024-05-15 02:30:35.719516] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.338 [2024-05-15 02:30:35.719561] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.338 [2024-05-15 02:30:35.719589] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.338 [2024-05-15 02:30:35.719600] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.338 [2024-05-15 02:30:35.719610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.338 [2024-05-15 02:30:35.719700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.339 [2024-05-15 02:30:35.719738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.339 [2024-05-15 02:30:35.719790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.339 [2024-05-15 02:30:35.719792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.596 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:48.596 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:13:48.596 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.596 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.596 02:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.596 02:30:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.596 02:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:48.853 [2024-05-15 02:30:36.082232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.853 02:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.112 02:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:49.112 02:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.370 02:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:49.370 02:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.628 02:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:49.628 02:30:36 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.886 02:30:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:49.886 02:30:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:50.171 02:30:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.429 02:30:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:50.429 02:30:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.687 02:30:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:50.687 02:30:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.945 02:30:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:50.945 02:30:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:51.202 02:30:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:51.461 02:30:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:51.461 02:30:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:51.718 02:30:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:51.718 02:30:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:51.976 02:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.234 [2024-05-15 02:30:39.435281] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:52.234 [2024-05-15 02:30:39.435551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.234 02:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:52.491 02:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:52.749 02:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:53.316 02:30:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:13:53.316 02:30:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:13:53.316 02:30:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.316 02:30:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:13:53.316 02:30:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:13:53.316 02:30:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:13:55.213 02:30:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:55.213 02:30:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:55.213 02:30:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.213 02:30:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:13:55.213 02:30:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.213 02:30:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:13:55.213 02:30:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:55.213 [global] 00:13:55.213 thread=1 00:13:55.213 invalidate=1 00:13:55.213 rw=write 00:13:55.213 time_based=1 00:13:55.213 runtime=1 00:13:55.213 ioengine=libaio 00:13:55.213 direct=1 00:13:55.213 bs=4096 00:13:55.213 iodepth=1 00:13:55.213 norandommap=0 00:13:55.213 numjobs=1 00:13:55.213 00:13:55.213 verify_dump=1 00:13:55.213 verify_backlog=512 00:13:55.213 verify_state_save=0 00:13:55.213 do_verify=1 00:13:55.213 verify=crc32c-intel 00:13:55.213 [job0] 00:13:55.213 filename=/dev/nvme0n1 00:13:55.213 [job1] 00:13:55.213 filename=/dev/nvme0n2 00:13:55.213 [job2] 00:13:55.213 filename=/dev/nvme0n3 00:13:55.213 [job3] 00:13:55.213 filename=/dev/nvme0n4 00:13:55.471 Could not set queue depth (nvme0n1) 00:13:55.471 Could not set queue depth (nvme0n2) 00:13:55.471 Could not set queue depth (nvme0n3) 00:13:55.471 Could not set queue depth (nvme0n4) 00:13:55.471 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.471 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.471 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.471 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.471 fio-3.35 00:13:55.471 Starting 4 threads 00:13:56.844 00:13:56.844 job0: (groupid=0, jobs=1): err= 0: pid=2299346: Wed May 15 02:30:44 2024 00:13:56.844 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:13:56.844 slat (nsec): min=6823, max=40475, avg=25703.32, stdev=10522.88 00:13:56.844 clat (usec): min=472, max=41523, avg=39137.83, stdev=8637.02 00:13:56.844 lat (usec): min=488, max=41532, avg=39163.53, stdev=8639.19 00:13:56.844 clat percentiles (usec): 00:13:56.844 | 1.00th=[ 474], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:56.844 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:56.844 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:56.844 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 
00:13:56.844 | 99.99th=[41681] 00:13:56.844 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:13:56.844 slat (nsec): min=6898, max=52248, avg=10789.15, stdev=6968.08 00:13:56.844 clat (usec): min=215, max=1032, avg=259.57, stdev=53.43 00:13:56.844 lat (usec): min=222, max=1055, avg=270.36, stdev=57.43 00:13:56.844 clat percentiles (usec): 00:13:56.844 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:13:56.844 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:13:56.844 | 70.00th=[ 255], 80.00th=[ 281], 90.00th=[ 326], 95.00th=[ 351], 00:13:56.844 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 1037], 99.95th=[ 1037], 00:13:56.844 | 99.99th=[ 1037] 00:13:56.844 bw ( KiB/s): min= 4096, max= 4096, per=34.60%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.844 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.844 lat (usec) : 250=62.55%, 500=33.33% 00:13:56.844 lat (msec) : 2=0.19%, 50=3.93% 00:13:56.844 cpu : usr=0.30%, sys=0.80%, ctx=535, majf=0, minf=1 00:13:56.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.844 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.844 job1: (groupid=0, jobs=1): err= 0: pid=2299354: Wed May 15 02:30:44 2024 00:13:56.844 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:13:56.844 slat (nsec): min=7516, max=34655, avg=25030.70, stdev=9791.63 00:13:56.844 clat (usec): min=472, max=41578, avg=39212.74, stdev=8446.53 00:13:56.844 lat (usec): min=485, max=41585, avg=39237.77, stdev=8449.13 00:13:56.844 clat percentiles (usec): 00:13:56.844 | 1.00th=[ 474], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:56.844 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:56.844 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:56.844 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:56.844 | 99.99th=[41681] 00:13:56.844 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:13:56.844 slat (nsec): min=7223, max=36595, avg=10387.39, stdev=5063.76 00:13:56.844 clat (usec): min=222, max=333, avg=250.39, stdev=14.30 00:13:56.844 lat (usec): min=230, max=369, avg=260.78, stdev=16.84 00:13:56.844 clat percentiles (usec): 00:13:56.844 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 239], 00:13:56.844 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:13:56.844 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 277], 00:13:56.844 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 334], 99.95th=[ 334], 00:13:56.844 | 99.99th=[ 334] 00:13:56.844 bw ( KiB/s): min= 4096, max= 4096, per=34.60%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.844 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.844 lat (usec) : 250=52.90%, 500=42.99% 00:13:56.844 lat (msec) : 50=4.11% 00:13:56.844 cpu : usr=0.39%, sys=0.68%, ctx=535, majf=0, minf=2 00:13:56.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.844 issued rwts: total=23,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:13:56.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.844 job2: (groupid=0, jobs=1): err= 0: pid=2299381: Wed May 15 02:30:44 2024 00:13:56.844 read: IOPS=718, BW=2873KiB/s (2942kB/s)(2876KiB/1001msec) 00:13:56.844 slat (nsec): min=5646, max=38855, avg=15022.57, stdev=8622.39 00:13:56.844 clat (usec): min=347, max=41546, avg=785.66, stdev=3392.16 00:13:56.844 lat (usec): min=353, max=41561, avg=800.68, stdev=3392.49 00:13:56.844 clat percentiles (usec): 00:13:56.844 | 1.00th=[ 379], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 445], 00:13:56.844 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 490], 60.00th=[ 523], 00:13:56.844 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 627], 00:13:56.844 | 99.00th=[ 742], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:56.844 | 99.99th=[41681] 00:13:56.844 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:56.844 slat (nsec): min=7416, max=66717, avg=22829.85, stdev=11677.74 00:13:56.844 clat (usec): min=216, max=940, avg=382.96, stdev=93.39 00:13:56.844 lat (usec): min=223, max=985, avg=405.79, stdev=92.33 00:13:56.844 clat percentiles (usec): 00:13:56.844 | 1.00th=[ 245], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 297], 00:13:56.844 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 400], 00:13:56.844 | 70.00th=[ 429], 80.00th=[ 457], 90.00th=[ 502], 95.00th=[ 537], 00:13:56.844 | 99.00th=[ 693], 99.50th=[ 750], 99.90th=[ 848], 99.95th=[ 938], 00:13:56.844 | 99.99th=[ 938] 00:13:56.844 bw ( KiB/s): min= 4344, max= 4344, per=36.69%, avg=4344.00, stdev= 0.00, samples=1 00:13:56.844 iops : min= 1086, max= 1086, avg=1086.00, stdev= 0.00, samples=1 00:13:56.844 lat (usec) : 250=0.80%, 500=74.53%, 750=23.98%, 1000=0.40% 00:13:56.844 lat (msec) : 50=0.29% 00:13:56.844 cpu : usr=2.80%, sys=3.90%, ctx=1744, majf=0, minf=1 00:13:56.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.844 issued rwts: total=719,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.844 job3: (groupid=0, jobs=1): err= 0: pid=2299397: Wed May 15 02:30:44 2024 00:13:56.844 read: IOPS=624, BW=2498KiB/s (2557kB/s)(2500KiB/1001msec) 00:13:56.844 slat (nsec): min=6254, max=62083, avg=20564.58, stdev=10448.87 00:13:56.844 clat (usec): min=363, max=41582, avg=946.18, stdev=4295.11 00:13:56.844 lat (usec): min=379, max=41601, avg=966.75, stdev=4295.88 00:13:56.844 clat percentiles (usec): 00:13:56.844 | 1.00th=[ 400], 5.00th=[ 416], 10.00th=[ 433], 20.00th=[ 445], 00:13:56.845 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 482], 60.00th=[ 506], 00:13:56.845 | 70.00th=[ 519], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 594], 00:13:56.845 | 99.00th=[40633], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:56.845 | 99.99th=[41681] 00:13:56.845 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:56.845 slat (usec): min=6, max=3988, avg=23.78, stdev=124.55 00:13:56.845 clat (usec): min=214, max=1076, avg=354.40, stdev=83.02 00:13:56.845 lat (usec): min=222, max=4499, avg=378.18, stdev=153.46 00:13:56.845 clat percentiles (usec): 00:13:56.845 | 1.00th=[ 227], 5.00th=[ 241], 10.00th=[ 265], 20.00th=[ 285], 00:13:56.845 | 30.00th=[ 310], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 367], 00:13:56.845 | 
70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 445], 95.00th=[ 474], 00:13:56.845 | 99.00th=[ 570], 99.50th=[ 799], 99.90th=[ 930], 99.95th=[ 1074], 00:13:56.845 | 99.99th=[ 1074] 00:13:56.845 bw ( KiB/s): min= 4096, max= 4096, per=34.60%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.845 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.845 lat (usec) : 250=4.00%, 500=77.68%, 750=17.47%, 1000=0.36% 00:13:56.845 lat (msec) : 2=0.06%, 50=0.42% 00:13:56.845 cpu : usr=1.90%, sys=3.40%, ctx=1652, majf=0, minf=1 00:13:56.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.845 issued rwts: total=625,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.845 00:13:56.845 Run status group 0 (all jobs): 00:13:56.845 READ: bw=5353KiB/s (5481kB/s), 87.8KiB/s-2873KiB/s (89.9kB/s-2942kB/s), io=5556KiB (5689kB), run=1001-1038msec 00:13:56.845 WRITE: bw=11.6MiB/s (12.1MB/s), 1973KiB/s-4092KiB/s (2020kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1038msec 00:13:56.845 00:13:56.845 Disk stats (read/write): 00:13:56.845 nvme0n1: ios=67/512, merge=0/0, ticks=743/121, in_queue=864, util=87.07% 00:13:56.845 nvme0n2: ios=68/512, merge=0/0, ticks=777/126, in_queue=903, util=90.84% 00:13:56.845 nvme0n3: ios=706/1024, merge=0/0, ticks=595/381, in_queue=976, util=92.75% 00:13:56.845 nvme0n4: ios=572/762, merge=0/0, ticks=726/261, in_queue=987, util=94.38% 00:13:56.845 02:30:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:56.845 [global] 00:13:56.845 thread=1 00:13:56.845 invalidate=1 00:13:56.845 rw=randwrite 00:13:56.845 time_based=1 00:13:56.845 runtime=1 00:13:56.845 ioengine=libaio 00:13:56.845 direct=1 00:13:56.845 bs=4096 00:13:56.845 iodepth=1 00:13:56.845 norandommap=0 00:13:56.845 numjobs=1 00:13:56.845 00:13:56.845 verify_dump=1 00:13:56.845 verify_backlog=512 00:13:56.845 verify_state_save=0 00:13:56.845 do_verify=1 00:13:56.845 verify=crc32c-intel 00:13:56.845 [job0] 00:13:56.845 filename=/dev/nvme0n1 00:13:56.845 [job1] 00:13:56.845 filename=/dev/nvme0n2 00:13:56.845 [job2] 00:13:56.845 filename=/dev/nvme0n3 00:13:56.845 [job3] 00:13:56.845 filename=/dev/nvme0n4 00:13:56.845 Could not set queue depth (nvme0n1) 00:13:56.845 Could not set queue depth (nvme0n2) 00:13:56.845 Could not set queue depth (nvme0n3) 00:13:56.845 Could not set queue depth (nvme0n4) 00:13:57.103 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.103 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.103 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.103 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.103 fio-3.35 00:13:57.103 Starting 4 threads 00:13:58.476 00:13:58.476 job0: (groupid=0, jobs=1): err= 0: pid=2299692: Wed May 15 02:30:45 2024 00:13:58.476 read: IOPS=19, BW=79.3KiB/s (81.2kB/s)(80.0KiB/1009msec) 00:13:58.476 slat (nsec): min=8080, max=34287, avg=16304.35, stdev=6346.71 00:13:58.476 clat (usec): min=40944, max=41998, avg=41332.63, stdev=484.00 
00:13:58.476 lat (usec): min=40962, max=42033, avg=41348.93, stdev=485.53 00:13:58.476 clat percentiles (usec): 00:13:58.476 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:58.476 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:58.476 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:58.476 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:58.476 | 99.99th=[42206] 00:13:58.476 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:13:58.476 slat (nsec): min=7300, max=36460, avg=12293.65, stdev=5752.37 00:13:58.476 clat (usec): min=275, max=2610, avg=337.78, stdev=154.85 00:13:58.476 lat (usec): min=283, max=2620, avg=350.08, stdev=155.29 00:13:58.476 clat percentiles (usec): 00:13:58.476 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 289], 00:13:58.476 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:13:58.476 | 70.00th=[ 326], 80.00th=[ 355], 90.00th=[ 404], 95.00th=[ 490], 00:13:58.476 | 99.00th=[ 619], 99.50th=[ 627], 99.90th=[ 2606], 99.95th=[ 2606], 00:13:58.476 | 99.99th=[ 2606] 00:13:58.476 bw ( KiB/s): min= 4096, max= 4096, per=33.73%, avg=4096.00, stdev= 0.00, samples=1 00:13:58.476 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:58.476 lat (usec) : 500=91.73%, 750=4.14% 00:13:58.476 lat (msec) : 4=0.38%, 50=3.76% 00:13:58.476 cpu : usr=0.20%, sys=0.99%, ctx=535, majf=0, minf=1 00:13:58.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.476 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.476 job1: (groupid=0, jobs=1): err= 0: pid=2299693: Wed May 15 02:30:45 2024 00:13:58.476 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:13:58.476 slat (nsec): min=8272, max=44293, avg=17908.76, stdev=8320.62 00:13:58.476 clat (usec): min=40823, max=41992, avg=41141.74, stdev=367.60 00:13:58.476 lat (usec): min=40858, max=42006, avg=41159.64, stdev=365.04 00:13:58.476 clat percentiles (usec): 00:13:58.476 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:58.476 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:58.476 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:58.476 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:58.476 | 99.99th=[42206] 00:13:58.476 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:13:58.476 slat (nsec): min=6102, max=32920, avg=10304.15, stdev=4946.06 00:13:58.476 clat (usec): min=216, max=804, avg=273.45, stdev=59.12 00:13:58.476 lat (usec): min=223, max=822, avg=283.75, stdev=60.32 00:13:58.476 clat percentiles (usec): 00:13:58.476 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:13:58.476 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:13:58.476 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 338], 95.00th=[ 383], 00:13:58.476 | 99.00th=[ 461], 99.50th=[ 685], 99.90th=[ 807], 99.95th=[ 807], 00:13:58.476 | 99.99th=[ 807] 00:13:58.476 bw ( KiB/s): min= 4096, max= 4096, per=33.73%, avg=4096.00, stdev= 0.00, samples=1 00:13:58.476 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:58.476 lat (usec) : 
250=39.59%, 500=55.53%, 750=0.75%, 1000=0.19% 00:13:58.476 lat (msec) : 50=3.94% 00:13:58.476 cpu : usr=0.40%, sys=0.40%, ctx=536, majf=0, minf=1 00:13:58.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.476 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.476 job2: (groupid=0, jobs=1): err= 0: pid=2299694: Wed May 15 02:30:45 2024 00:13:58.476 read: IOPS=1012, BW=4052KiB/s (4149kB/s)(4056KiB/1001msec) 00:13:58.476 slat (nsec): min=5256, max=60466, avg=13964.32, stdev=7498.75 00:13:58.476 clat (usec): min=361, max=41552, avg=602.39, stdev=2205.46 00:13:58.476 lat (usec): min=370, max=41569, avg=616.36, stdev=2205.43 00:13:58.476 clat percentiles (usec): 00:13:58.476 | 1.00th=[ 375], 5.00th=[ 400], 10.00th=[ 412], 20.00th=[ 429], 00:13:58.476 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 474], 60.00th=[ 486], 00:13:58.476 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 603], 00:13:58.476 | 99.00th=[ 725], 99.50th=[ 783], 99.90th=[41157], 99.95th=[41681], 00:13:58.476 | 99.99th=[41681] 00:13:58.476 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:58.476 slat (nsec): min=6207, max=71987, avg=16724.17, stdev=9214.49 00:13:58.476 clat (usec): min=220, max=860, avg=340.62, stdev=92.22 00:13:58.476 lat (usec): min=226, max=877, avg=357.34, stdev=94.16 00:13:58.476 clat percentiles (usec): 00:13:58.476 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 253], 00:13:58.476 | 30.00th=[ 273], 40.00th=[ 293], 50.00th=[ 314], 60.00th=[ 355], 00:13:58.476 | 70.00th=[ 383], 80.00th=[ 420], 90.00th=[ 465], 95.00th=[ 515], 00:13:58.476 | 99.00th=[ 586], 99.50th=[ 652], 99.90th=[ 709], 99.95th=[ 865], 00:13:58.476 | 99.99th=[ 865] 00:13:58.476 bw ( KiB/s): min= 5344, max= 5344, per=44.01%, avg=5344.00, stdev= 0.00, samples=1 00:13:58.476 iops : min= 1336, max= 1336, avg=1336.00, stdev= 0.00, samples=1 00:13:58.476 lat (usec) : 250=8.19%, 500=71.44%, 750=19.87%, 1000=0.34% 00:13:58.476 lat (msec) : 50=0.15% 00:13:58.476 cpu : usr=3.20%, sys=2.80%, ctx=2039, majf=0, minf=2 00:13:58.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.476 issued rwts: total=1014,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.476 job3: (groupid=0, jobs=1): err= 0: pid=2299695: Wed May 15 02:30:45 2024 00:13:58.476 read: IOPS=875, BW=3500KiB/s (3585kB/s)(3504KiB/1001msec) 00:13:58.476 slat (nsec): min=5195, max=70007, avg=18158.66, stdev=9318.77 00:13:58.476 clat (usec): min=344, max=41505, avg=750.95, stdev=3363.81 00:13:58.476 lat (usec): min=364, max=41537, avg=769.11, stdev=3364.47 00:13:58.476 clat percentiles (usec): 00:13:58.476 | 1.00th=[ 367], 5.00th=[ 400], 10.00th=[ 412], 20.00th=[ 433], 00:13:58.476 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 478], 00:13:58.476 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 562], 00:13:58.476 | 99.00th=[ 758], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:58.476 | 99.99th=[41681] 00:13:58.476 write: IOPS=1022, BW=4092KiB/s 
(4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:58.476 slat (nsec): min=5812, max=33820, avg=10578.45, stdev=4656.24 00:13:58.476 clat (usec): min=209, max=757, avg=300.60, stdev=96.84 00:13:58.476 lat (usec): min=216, max=773, avg=311.18, stdev=99.23 00:13:58.476 clat percentiles (usec): 00:13:58.476 | 1.00th=[ 212], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:13:58.476 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 289], 00:13:58.476 | 70.00th=[ 355], 80.00th=[ 392], 90.00th=[ 445], 95.00th=[ 478], 00:13:58.476 | 99.00th=[ 553], 99.50th=[ 635], 99.90th=[ 742], 99.95th=[ 758], 00:13:58.476 | 99.99th=[ 758] 00:13:58.476 bw ( KiB/s): min= 4096, max= 4096, per=33.73%, avg=4096.00, stdev= 0.00, samples=1 00:13:58.476 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:58.476 lat (usec) : 250=28.11%, 500=57.37%, 750=14.00%, 1000=0.21% 00:13:58.477 lat (msec) : 50=0.32% 00:13:58.477 cpu : usr=1.10%, sys=3.10%, ctx=1900, majf=0, minf=1 00:13:58.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.477 issued rwts: total=876,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.477 00:13:58.477 Run status group 0 (all jobs): 00:13:58.477 READ: bw=7632KiB/s (7816kB/s), 79.3KiB/s-4052KiB/s (81.2kB/s-4149kB/s), io=7724KiB (7909kB), run=1001-1012msec 00:13:58.477 WRITE: bw=11.9MiB/s (12.4MB/s), 2024KiB/s-4092KiB/s (2072kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1012msec 00:13:58.477 00:13:58.477 Disk stats (read/write): 00:13:58.477 nvme0n1: ios=40/512, merge=0/0, ticks=1526/171, in_queue=1697, util=85.77% 00:13:58.477 nvme0n2: ios=44/512, merge=0/0, ticks=1609/134, in_queue=1743, util=89.85% 00:13:58.477 nvme0n3: ios=794/1024, merge=0/0, ticks=623/342, in_queue=965, util=95.42% 00:13:58.477 nvme0n4: ios=689/1024, merge=0/0, ticks=595/298, in_queue=893, util=96.12% 00:13:58.477 02:30:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:58.477 [global] 00:13:58.477 thread=1 00:13:58.477 invalidate=1 00:13:58.477 rw=write 00:13:58.477 time_based=1 00:13:58.477 runtime=1 00:13:58.477 ioengine=libaio 00:13:58.477 direct=1 00:13:58.477 bs=4096 00:13:58.477 iodepth=128 00:13:58.477 norandommap=0 00:13:58.477 numjobs=1 00:13:58.477 00:13:58.477 verify_dump=1 00:13:58.477 verify_backlog=512 00:13:58.477 verify_state_save=0 00:13:58.477 do_verify=1 00:13:58.477 verify=crc32c-intel 00:13:58.477 [job0] 00:13:58.477 filename=/dev/nvme0n1 00:13:58.477 [job1] 00:13:58.477 filename=/dev/nvme0n2 00:13:58.477 [job2] 00:13:58.477 filename=/dev/nvme0n3 00:13:58.477 [job3] 00:13:58.477 filename=/dev/nvme0n4 00:13:58.477 Could not set queue depth (nvme0n1) 00:13:58.477 Could not set queue depth (nvme0n2) 00:13:58.477 Could not set queue depth (nvme0n3) 00:13:58.477 Could not set queue depth (nvme0n4) 00:13:58.477 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.477 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.477 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.477 job3: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.477 fio-3.35 00:13:58.477 Starting 4 threads 00:13:59.850 00:13:59.850 job0: (groupid=0, jobs=1): err= 0: pid=2299921: Wed May 15 02:30:46 2024 00:13:59.850 read: IOPS=3077, BW=12.0MiB/s (12.6MB/s)(12.2MiB/1016msec) 00:13:59.850 slat (usec): min=2, max=27505, avg=156.33, stdev=1260.48 00:13:59.850 clat (usec): min=5941, max=79974, avg=20625.02, stdev=14767.80 00:13:59.850 lat (usec): min=5944, max=79979, avg=20781.34, stdev=14833.79 00:13:59.850 clat percentiles (usec): 00:13:59.850 | 1.00th=[ 5997], 5.00th=[ 9503], 10.00th=[10814], 20.00th=[11469], 00:13:59.850 | 30.00th=[11863], 40.00th=[12125], 50.00th=[14091], 60.00th=[18220], 00:13:59.850 | 70.00th=[21365], 80.00th=[26608], 90.00th=[39584], 95.00th=[59507], 00:13:59.850 | 99.00th=[74974], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:13:59.850 | 99.99th=[80217] 00:13:59.850 write: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec); 0 zone resets 00:13:59.850 slat (usec): min=3, max=40957, avg=137.55, stdev=1051.98 00:13:59.850 clat (usec): min=2062, max=57392, avg=15799.65, stdev=8978.22 00:13:59.850 lat (usec): min=2068, max=62405, avg=15937.20, stdev=9055.97 00:13:59.850 clat percentiles (usec): 00:13:59.850 | 1.00th=[ 3818], 5.00th=[ 7308], 10.00th=[ 8160], 20.00th=[ 9503], 00:13:59.850 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13042], 60.00th=[14877], 00:13:59.850 | 70.00th=[17171], 80.00th=[19792], 90.00th=[23725], 95.00th=[34341], 00:13:59.850 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:13:59.850 | 99.99th=[57410] 00:13:59.850 bw ( KiB/s): min=12584, max=15504, per=23.85%, avg=14044.00, stdev=2064.75, samples=2 00:13:59.850 iops : min= 3146, max= 3876, avg=3511.00, stdev=516.19, samples=2 00:13:59.850 lat (msec) : 4=0.54%, 10=14.36%, 20=59.16%, 50=21.46%, 100=4.49% 00:13:59.850 cpu : usr=2.17%, sys=4.63%, ctx=323, majf=0, minf=11 00:13:59.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:59.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.850 issued rwts: total=3127,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.850 job1: (groupid=0, jobs=1): err= 0: pid=2299922: Wed May 15 02:30:46 2024 00:13:59.850 read: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1018msec) 00:13:59.850 slat (usec): min=3, max=24578, avg=136.48, stdev=981.45 00:13:59.850 clat (usec): min=3790, max=54098, avg=17269.78, stdev=9569.61 00:13:59.850 lat (usec): min=3795, max=54117, avg=17406.26, stdev=9645.85 00:13:59.850 clat percentiles (usec): 00:13:59.850 | 1.00th=[ 7177], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[10159], 00:13:59.850 | 30.00th=[10683], 40.00th=[11863], 50.00th=[13042], 60.00th=[15008], 00:13:59.850 | 70.00th=[19792], 80.00th=[27657], 90.00th=[32900], 95.00th=[35914], 00:13:59.850 | 99.00th=[46400], 99.50th=[46924], 99.90th=[46924], 99.95th=[47449], 00:13:59.850 | 99.99th=[54264] 00:13:59.850 write: IOPS=4021, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1018msec); 0 zone resets 00:13:59.850 slat (usec): min=4, max=15709, avg=116.35, stdev=598.06 00:13:59.850 clat (usec): min=1526, max=44813, avg=16429.13, stdev=7676.43 00:13:59.850 lat (usec): min=1538, max=44832, avg=16545.47, stdev=7719.87 00:13:59.850 clat percentiles (usec): 00:13:59.850 | 1.00th=[ 4621], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 
9634], 00:13:59.850 | 30.00th=[10945], 40.00th=[11469], 50.00th=[15401], 60.00th=[17957], 00:13:59.850 | 70.00th=[20841], 80.00th=[22938], 90.00th=[27919], 95.00th=[30016], 00:13:59.850 | 99.00th=[39060], 99.50th=[40109], 99.90th=[43254], 99.95th=[44303], 00:13:59.850 | 99.99th=[44827] 00:13:59.850 bw ( KiB/s): min=15344, max=16416, per=26.96%, avg=15880.00, stdev=758.02, samples=2 00:13:59.850 iops : min= 3836, max= 4104, avg=3970.00, stdev=189.50, samples=2 00:13:59.850 lat (msec) : 2=0.03%, 4=0.43%, 10=20.76%, 20=48.40%, 50=30.37% 00:13:59.850 lat (msec) : 100=0.01% 00:13:59.850 cpu : usr=5.01%, sys=6.78%, ctx=465, majf=0, minf=13 00:13:59.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:59.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.851 issued rwts: total=3584,4094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.851 job2: (groupid=0, jobs=1): err= 0: pid=2299928: Wed May 15 02:30:46 2024 00:13:59.851 read: IOPS=3088, BW=12.1MiB/s (12.6MB/s)(12.3MiB/1019msec) 00:13:59.851 slat (usec): min=3, max=26502, avg=156.39, stdev=1165.63 00:13:59.851 clat (usec): min=7802, max=49762, avg=20726.55, stdev=8265.20 00:13:59.851 lat (usec): min=7820, max=56864, avg=20882.94, stdev=8330.75 00:13:59.851 clat percentiles (usec): 00:13:59.851 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11994], 20.00th=[13304], 00:13:59.851 | 30.00th=[15270], 40.00th=[16319], 50.00th=[18482], 60.00th=[21103], 00:13:59.851 | 70.00th=[23462], 80.00th=[27919], 90.00th=[34866], 95.00th=[36439], 00:13:59.851 | 99.00th=[42206], 99.50th=[43779], 99.90th=[46924], 99.95th=[47449], 00:13:59.851 | 99.99th=[49546] 00:13:59.851 write: IOPS=3517, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1019msec); 0 zone resets 00:13:59.851 slat (usec): min=5, max=16146, avg=133.09, stdev=758.58 00:13:59.851 clat (usec): min=1493, max=44847, avg=17867.96, stdev=7529.96 00:13:59.851 lat (usec): min=1518, max=44866, avg=18001.04, stdev=7568.59 00:13:59.851 clat percentiles (usec): 00:13:59.851 | 1.00th=[ 5342], 5.00th=[ 7963], 10.00th=[10028], 20.00th=[11338], 00:13:59.851 | 30.00th=[12911], 40.00th=[13829], 50.00th=[15401], 60.00th=[19530], 00:13:59.851 | 70.00th=[22676], 80.00th=[24511], 90.00th=[28181], 95.00th=[31065], 00:13:59.851 | 99.00th=[36963], 99.50th=[40109], 99.90th=[44827], 99.95th=[44827], 00:13:59.851 | 99.99th=[44827] 00:13:59.851 bw ( KiB/s): min=12288, max=15960, per=23.98%, avg=14124.00, stdev=2596.50, samples=2 00:13:59.851 iops : min= 3072, max= 3990, avg=3531.00, stdev=649.12, samples=2 00:13:59.851 lat (msec) : 2=0.03%, 4=0.03%, 10=5.50%, 20=54.54%, 50=39.90% 00:13:59.851 cpu : usr=4.81%, sys=5.89%, ctx=340, majf=0, minf=11 00:13:59.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:59.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.851 issued rwts: total=3147,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.851 job3: (groupid=0, jobs=1): err= 0: pid=2299930: Wed May 15 02:30:46 2024 00:13:59.851 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:13:59.851 slat (usec): min=2, max=24887, avg=134.07, stdev=991.90 00:13:59.851 clat (usec): min=8962, max=54105, avg=17701.15, 
stdev=7831.44 00:13:59.851 lat (usec): min=8965, max=54110, avg=17835.22, stdev=7899.01 00:13:59.851 clat percentiles (usec): 00:13:59.851 | 1.00th=[ 8979], 5.00th=[11338], 10.00th=[11731], 20.00th=[13042], 00:13:59.851 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14353], 60.00th=[15664], 00:13:59.851 | 70.00th=[18482], 80.00th=[20055], 90.00th=[27919], 95.00th=[34866], 00:13:59.851 | 99.00th=[45876], 99.50th=[51643], 99.90th=[54264], 99.95th=[54264], 00:13:59.851 | 99.99th=[54264] 00:13:59.851 write: IOPS=3719, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1006msec); 0 zone resets 00:13:59.851 slat (usec): min=3, max=41050, avg=128.70, stdev=1239.76 00:13:59.851 clat (usec): min=739, max=66015, avg=15083.69, stdev=8943.57 00:13:59.851 lat (usec): min=745, max=66023, avg=15212.38, stdev=9016.82 00:13:59.851 clat percentiles (usec): 00:13:59.851 | 1.00th=[ 4948], 5.00th=[ 6849], 10.00th=[ 8094], 20.00th=[ 9241], 00:13:59.851 | 30.00th=[10945], 40.00th=[12387], 50.00th=[13829], 60.00th=[14353], 00:13:59.851 | 70.00th=[16188], 80.00th=[17171], 90.00th=[20579], 95.00th=[34866], 00:13:59.851 | 99.00th=[64226], 99.50th=[64226], 99.90th=[65799], 99.95th=[65799], 00:13:59.851 | 99.99th=[65799] 00:13:59.851 bw ( KiB/s): min=12288, max=16720, per=24.63%, avg=14504.00, stdev=3133.90, samples=2 00:13:59.851 iops : min= 3072, max= 4180, avg=3626.00, stdev=783.47, samples=2 00:13:59.851 lat (usec) : 750=0.03%, 1000=0.01% 00:13:59.851 lat (msec) : 2=0.04%, 4=0.23%, 10=13.12%, 20=70.52%, 50=14.76% 00:13:59.851 lat (msec) : 100=1.30% 00:13:59.851 cpu : usr=3.28%, sys=4.38%, ctx=229, majf=0, minf=15 00:13:59.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:59.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.851 issued rwts: total=3584,3742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.851 00:13:59.851 Run status group 0 (all jobs): 00:13:59.851 READ: bw=51.5MiB/s (54.0MB/s), 12.0MiB/s-13.9MiB/s (12.6MB/s-14.6MB/s), io=52.5MiB (55.1MB), run=1006-1019msec 00:13:59.851 WRITE: bw=57.5MiB/s (60.3MB/s), 13.7MiB/s-15.7MiB/s (14.4MB/s-16.5MB/s), io=58.6MiB (61.5MB), run=1006-1019msec 00:13:59.851 00:13:59.851 Disk stats (read/write): 00:13:59.851 nvme0n1: ios=2992/3072, merge=0/0, ticks=33169/32655, in_queue=65824, util=91.78% 00:13:59.851 nvme0n2: ios=3128/3279, merge=0/0, ticks=54732/49705, in_queue=104437, util=95.83% 00:13:59.851 nvme0n3: ios=2582/3071, merge=0/0, ticks=53872/51266, in_queue=105138, util=97.18% 00:13:59.851 nvme0n4: ios=2894/3072, merge=0/0, ticks=38342/34019, in_queue=72361, util=100.00% 00:13:59.851 02:30:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:59.851 [global] 00:13:59.851 thread=1 00:13:59.851 invalidate=1 00:13:59.851 rw=randwrite 00:13:59.851 time_based=1 00:13:59.851 runtime=1 00:13:59.851 ioengine=libaio 00:13:59.851 direct=1 00:13:59.851 bs=4096 00:13:59.851 iodepth=128 00:13:59.851 norandommap=0 00:13:59.851 numjobs=1 00:13:59.851 00:13:59.851 verify_dump=1 00:13:59.851 verify_backlog=512 00:13:59.851 verify_state_save=0 00:13:59.851 do_verify=1 00:13:59.851 verify=crc32c-intel 00:13:59.851 [job0] 00:13:59.851 filename=/dev/nvme0n1 00:13:59.851 [job1] 00:13:59.851 filename=/dev/nvme0n2 00:13:59.851 [job2] 00:13:59.851 filename=/dev/nvme0n3 
00:13:59.851 [job3] 00:13:59.851 filename=/dev/nvme0n4 00:13:59.851 Could not set queue depth (nvme0n1) 00:13:59.851 Could not set queue depth (nvme0n2) 00:13:59.851 Could not set queue depth (nvme0n3) 00:13:59.851 Could not set queue depth (nvme0n4) 00:13:59.851 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.851 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.851 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.851 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.851 fio-3.35 00:13:59.851 Starting 4 threads 00:14:01.237 00:14:01.237 job0: (groupid=0, jobs=1): err= 0: pid=2300154: Wed May 15 02:30:48 2024 00:14:01.237 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:14:01.237 slat (usec): min=3, max=38232, avg=183.73, stdev=1625.12 00:14:01.237 clat (usec): min=8076, max=75795, avg=23488.70, stdev=15632.23 00:14:01.237 lat (usec): min=8093, max=75808, avg=23672.42, stdev=15693.57 00:14:01.237 clat percentiles (usec): 00:14:01.237 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[11600], 20.00th=[12911], 00:14:01.237 | 30.00th=[14222], 40.00th=[15270], 50.00th=[17171], 60.00th=[20317], 00:14:01.237 | 70.00th=[23200], 80.00th=[26346], 90.00th=[52691], 95.00th=[56361], 00:14:01.237 | 99.00th=[71828], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:14:01.237 | 99.99th=[76022] 00:14:01.237 write: IOPS=3303, BW=12.9MiB/s (13.5MB/s)(13.1MiB/1012msec); 0 zone resets 00:14:01.237 slat (usec): min=4, max=49107, avg=117.39, stdev=1188.91 00:14:01.237 clat (usec): min=3175, max=75791, avg=16807.13, stdev=10176.05 00:14:01.237 lat (usec): min=3186, max=75808, avg=16924.52, stdev=10205.24 00:14:01.237 clat percentiles (usec): 00:14:01.237 | 1.00th=[ 4817], 5.00th=[ 6456], 10.00th=[ 8225], 20.00th=[10552], 00:14:01.237 | 30.00th=[11731], 40.00th=[13042], 50.00th=[14353], 60.00th=[15926], 00:14:01.237 | 70.00th=[17433], 80.00th=[20579], 90.00th=[28443], 95.00th=[40109], 00:14:01.237 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56361], 99.95th=[76022], 00:14:01.237 | 99.99th=[76022] 00:14:01.237 bw ( KiB/s): min=11520, max=14208, per=22.49%, avg=12864.00, stdev=1900.70, samples=2 00:14:01.237 iops : min= 2880, max= 3552, avg=3216.00, stdev=475.18, samples=2 00:14:01.237 lat (msec) : 4=0.51%, 10=10.26%, 20=58.69%, 50=21.43%, 100=9.10% 00:14:01.237 cpu : usr=3.96%, sys=5.64%, ctx=258, majf=0, minf=13 00:14:01.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:14:01.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:01.237 issued rwts: total=3072,3343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:01.237 job1: (groupid=0, jobs=1): err= 0: pid=2300155: Wed May 15 02:30:48 2024 00:14:01.237 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:14:01.237 slat (usec): min=2, max=11485, avg=127.16, stdev=758.81 00:14:01.237 clat (usec): min=10047, max=30692, avg=15931.08, stdev=3037.53 00:14:01.237 lat (usec): min=10066, max=30708, avg=16058.24, stdev=3112.72 00:14:01.237 clat percentiles (usec): 00:14:01.237 | 1.00th=[10945], 5.00th=[12387], 10.00th=[12780], 20.00th=[13304], 00:14:01.237 | 30.00th=[14091], 
40.00th=[14615], 50.00th=[15008], 60.00th=[15926], 00:14:01.237 | 70.00th=[17171], 80.00th=[18744], 90.00th=[20317], 95.00th=[21103], 00:14:01.237 | 99.00th=[24249], 99.50th=[25035], 99.90th=[27919], 99.95th=[28443], 00:14:01.237 | 99.99th=[30802] 00:14:01.237 write: IOPS=3411, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1008msec); 0 zone resets 00:14:01.237 slat (usec): min=4, max=72187, avg=167.88, stdev=1699.30 00:14:01.237 clat (usec): min=4881, max=94568, avg=22880.36, stdev=16926.86 00:14:01.237 lat (usec): min=8108, max=94577, avg=23048.24, stdev=16994.74 00:14:01.237 clat percentiles (usec): 00:14:01.237 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[11338], 20.00th=[12780], 00:14:01.238 | 30.00th=[15533], 40.00th=[17695], 50.00th=[19530], 60.00th=[21103], 00:14:01.238 | 70.00th=[22414], 80.00th=[23987], 90.00th=[28705], 95.00th=[71828], 00:14:01.238 | 99.00th=[90702], 99.50th=[93848], 99.90th=[93848], 99.95th=[94897], 00:14:01.238 | 99.99th=[94897] 00:14:01.238 bw ( KiB/s): min=13048, max=13440, per=23.15%, avg=13244.00, stdev=277.19, samples=2 00:14:01.238 iops : min= 3262, max= 3360, avg=3311.00, stdev=69.30, samples=2 00:14:01.238 lat (msec) : 10=2.53%, 20=67.44%, 50=26.13%, 100=3.90% 00:14:01.238 cpu : usr=4.37%, sys=7.05%, ctx=363, majf=0, minf=11 00:14:01.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:14:01.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:01.238 issued rwts: total=3072,3439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:01.238 job2: (groupid=0, jobs=1): err= 0: pid=2300156: Wed May 15 02:30:48 2024 00:14:01.238 read: IOPS=2725, BW=10.6MiB/s (11.2MB/s)(11.1MiB/1047msec) 00:14:01.238 slat (usec): min=3, max=14482, avg=154.59, stdev=928.99 00:14:01.238 clat (usec): min=7534, max=60341, avg=20507.64, stdev=10079.42 00:14:01.238 lat (usec): min=7540, max=68140, avg=20662.23, stdev=10140.35 00:14:01.238 clat percentiles (usec): 00:14:01.238 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[11207], 20.00th=[12256], 00:14:01.238 | 30.00th=[13173], 40.00th=[16909], 50.00th=[19006], 60.00th=[20317], 00:14:01.238 | 70.00th=[22938], 80.00th=[26084], 90.00th=[30802], 95.00th=[36963], 00:14:01.238 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60556], 99.95th=[60556], 00:14:01.238 | 99.99th=[60556] 00:14:01.238 write: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1047msec); 0 zone resets 00:14:01.238 slat (usec): min=4, max=7810, avg=173.10, stdev=712.02 00:14:01.238 clat (usec): min=6702, max=43697, avg=24053.29, stdev=8561.83 00:14:01.238 lat (usec): min=6726, max=43708, avg=24226.39, stdev=8623.50 00:14:01.238 clat percentiles (usec): 00:14:01.238 | 1.00th=[ 9241], 5.00th=[11338], 10.00th=[12387], 20.00th=[15664], 00:14:01.238 | 30.00th=[19530], 40.00th=[21365], 50.00th=[23200], 60.00th=[25035], 00:14:01.238 | 70.00th=[28443], 80.00th=[32637], 90.00th=[35914], 95.00th=[39060], 00:14:01.238 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:14:01.238 | 99.99th=[43779] 00:14:01.238 bw ( KiB/s): min=12288, max=12288, per=21.48%, avg=12288.00, stdev= 0.00, samples=2 00:14:01.238 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:14:01.238 lat (msec) : 10=2.46%, 20=41.88%, 50=54.24%, 100=1.42% 00:14:01.238 cpu : usr=4.40%, sys=5.64%, ctx=390, majf=0, minf=15 00:14:01.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 
00:14:01.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:01.238 issued rwts: total=2854,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:01.238 job3: (groupid=0, jobs=1): err= 0: pid=2300157: Wed May 15 02:30:48 2024 00:14:01.238 read: IOPS=4697, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1008msec) 00:14:01.238 slat (usec): min=2, max=20265, avg=104.75, stdev=770.66 00:14:01.238 clat (usec): min=3765, max=43238, avg=13989.42, stdev=5269.27 00:14:01.238 lat (usec): min=4350, max=43253, avg=14094.18, stdev=5314.86 00:14:01.238 clat percentiles (usec): 00:14:01.238 | 1.00th=[ 7635], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10945], 00:14:01.238 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12387], 60.00th=[13173], 00:14:01.238 | 70.00th=[14615], 80.00th=[16188], 90.00th=[19268], 95.00th=[20841], 00:14:01.238 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:14:01.238 | 99.99th=[43254] 00:14:01.238 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:14:01.238 slat (usec): min=3, max=15840, avg=88.24, stdev=603.57 00:14:01.238 clat (usec): min=3533, max=41034, avg=11939.34, stdev=5283.21 00:14:01.238 lat (usec): min=4051, max=41053, avg=12027.58, stdev=5306.36 00:14:01.238 clat percentiles (usec): 00:14:01.238 | 1.00th=[ 4621], 5.00th=[ 6783], 10.00th=[ 7701], 20.00th=[ 8356], 00:14:01.238 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11338], 00:14:01.238 | 70.00th=[12256], 80.00th=[14091], 90.00th=[19530], 95.00th=[23987], 00:14:01.238 | 99.00th=[32637], 99.50th=[32637], 99.90th=[38011], 99.95th=[41157], 00:14:01.238 | 99.99th=[41157] 00:14:01.238 bw ( KiB/s): min=19488, max=21464, per=35.79%, avg=20476.00, stdev=1397.24, samples=2 00:14:01.238 iops : min= 4872, max= 5366, avg=5119.00, stdev=349.31, samples=2 00:14:01.238 lat (msec) : 4=0.03%, 10=26.46%, 20=65.50%, 50=8.01% 00:14:01.238 cpu : usr=6.55%, sys=8.24%, ctx=365, majf=0, minf=11 00:14:01.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:01.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:01.238 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:01.238 00:14:01.238 Run status group 0 (all jobs): 00:14:01.238 READ: bw=51.2MiB/s (53.7MB/s), 10.6MiB/s-18.3MiB/s (11.2MB/s-19.2MB/s), io=53.6MiB (56.2MB), run=1008-1047msec 00:14:01.238 WRITE: bw=55.9MiB/s (58.6MB/s), 11.5MiB/s-19.8MiB/s (12.0MB/s-20.8MB/s), io=58.5MiB (61.3MB), run=1008-1047msec 00:14:01.238 00:14:01.238 Disk stats (read/write): 00:14:01.238 nvme0n1: ios=2580/2983, merge=0/0, ticks=55772/47278, in_queue=103050, util=85.67% 00:14:01.238 nvme0n2: ios=2608/2791, merge=0/0, ticks=21822/23708, in_queue=45530, util=96.33% 00:14:01.238 nvme0n3: ios=2526/2560, merge=0/0, ticks=17159/19998, in_queue=37157, util=96.13% 00:14:01.238 nvme0n4: ios=3955/4096, merge=0/0, ticks=53861/43726, in_queue=97587, util=89.44% 00:14:01.238 02:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:01.238 02:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2300295 00:14:01.238 02:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:01.238 02:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:01.238 [global] 00:14:01.238 thread=1 00:14:01.238 invalidate=1 00:14:01.238 rw=read 00:14:01.238 time_based=1 00:14:01.238 runtime=10 00:14:01.238 ioengine=libaio 00:14:01.238 direct=1 00:14:01.238 bs=4096 00:14:01.238 iodepth=1 00:14:01.238 norandommap=1 00:14:01.238 numjobs=1 00:14:01.238 00:14:01.238 [job0] 00:14:01.238 filename=/dev/nvme0n1 00:14:01.238 [job1] 00:14:01.238 filename=/dev/nvme0n2 00:14:01.238 [job2] 00:14:01.238 filename=/dev/nvme0n3 00:14:01.238 [job3] 00:14:01.238 filename=/dev/nvme0n4 00:14:01.238 Could not set queue depth (nvme0n1) 00:14:01.238 Could not set queue depth (nvme0n2) 00:14:01.238 Could not set queue depth (nvme0n3) 00:14:01.238 Could not set queue depth (nvme0n4) 00:14:01.496 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:01.496 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:01.496 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:01.496 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:01.496 fio-3.35 00:14:01.496 Starting 4 threads 00:14:04.781 02:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:04.781 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=10625024, buflen=4096 00:14:04.781 fio: pid=2300509, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:04.781 02:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:04.781 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=405504, buflen=4096 00:14:04.781 fio: pid=2300508, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:04.781 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.781 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:05.039 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3268608, buflen=4096 00:14:05.039 fio: pid=2300506, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:05.039 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.039 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:05.298 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=9650176, buflen=4096 00:14:05.298 fio: pid=2300507, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:14:05.298 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.298 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:05.298 00:14:05.298 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u 
error, error=Remote I/O error): pid=2300506: Wed May 15 02:30:52 2024 00:14:05.298 read: IOPS=231, BW=926KiB/s (948kB/s)(3192KiB/3448msec) 00:14:05.298 slat (usec): min=5, max=8880, avg=41.83, stdev=460.27 00:14:05.298 clat (usec): min=384, max=42420, avg=4274.98, stdev=11848.70 00:14:05.298 lat (usec): min=402, max=50013, avg=4316.85, stdev=11889.46 00:14:05.298 clat percentiles (usec): 00:14:05.298 | 1.00th=[ 433], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 474], 00:14:05.298 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 486], 60.00th=[ 490], 00:14:05.298 | 70.00th=[ 502], 80.00th=[ 519], 90.00th=[ 660], 95.00th=[41157], 00:14:05.298 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:05.298 | 99.99th=[42206] 00:14:05.298 bw ( KiB/s): min= 96, max= 624, per=2.93%, avg=184.00, stdev=215.56, samples=6 00:14:05.298 iops : min= 24, max= 156, avg=46.00, stdev=53.89, samples=6 00:14:05.298 lat (usec) : 500=68.96%, 750=21.53%, 1000=0.13% 00:14:05.298 lat (msec) : 50=9.26% 00:14:05.298 cpu : usr=0.06%, sys=0.49%, ctx=802, majf=0, minf=1 00:14:05.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.298 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.298 issued rwts: total=799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.298 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2300507: Wed May 15 02:30:52 2024 00:14:05.298 read: IOPS=633, BW=2534KiB/s (2595kB/s)(9424KiB/3719msec) 00:14:05.298 slat (usec): min=6, max=6847, avg=25.33, stdev=185.01 00:14:05.298 clat (usec): min=362, max=44927, avg=1548.13, stdev=6390.46 00:14:05.298 lat (usec): min=378, max=48020, avg=1573.46, stdev=6425.59 00:14:05.298 clat percentiles (usec): 00:14:05.298 | 1.00th=[ 412], 5.00th=[ 441], 10.00th=[ 461], 20.00th=[ 482], 00:14:05.298 | 30.00th=[ 494], 40.00th=[ 506], 50.00th=[ 510], 60.00th=[ 519], 00:14:05.298 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 611], 00:14:05.298 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:14:05.298 | 99.99th=[44827] 00:14:05.298 bw ( KiB/s): min= 96, max= 7440, per=42.71%, avg=2686.29, stdev=3393.12, samples=7 00:14:05.298 iops : min= 24, max= 1860, avg=671.57, stdev=848.28, samples=7 00:14:05.298 lat (usec) : 500=34.58%, 750=62.58%, 1000=0.08% 00:14:05.298 lat (msec) : 2=0.04%, 4=0.08%, 10=0.04%, 50=2.55% 00:14:05.298 cpu : usr=0.43%, sys=1.56%, ctx=2364, majf=0, minf=1 00:14:05.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.298 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.298 issued rwts: total=2357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.298 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2300508: Wed May 15 02:30:52 2024 00:14:05.298 read: IOPS=31, BW=124KiB/s (127kB/s)(396KiB/3192msec) 00:14:05.298 slat (usec): min=8, max=7781, avg=99.73, stdev=775.96 00:14:05.298 clat (usec): min=470, max=41515, avg=32127.64, stdev=16694.74 00:14:05.298 lat (usec): min=486, max=49026, avg=32228.24, stdev=16757.69 00:14:05.298 clat percentiles (usec): 00:14:05.298 | 1.00th=[ 469], 
5.00th=[ 519], 10.00th=[ 545], 20.00th=[ 660], 00:14:05.298 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:05.298 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:05.298 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:05.298 | 99.99th=[41681] 00:14:05.298 bw ( KiB/s): min= 96, max= 248, per=1.99%, avg=125.33, stdev=60.43, samples=6 00:14:05.298 iops : min= 24, max= 62, avg=31.33, stdev=15.11, samples=6 00:14:05.298 lat (usec) : 500=2.00%, 750=19.00% 00:14:05.298 lat (msec) : 20=1.00%, 50=77.00% 00:14:05.298 cpu : usr=0.00%, sys=0.09%, ctx=101, majf=0, minf=1 00:14:05.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.298 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.298 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.298 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2300509: Wed May 15 02:30:52 2024 00:14:05.298 read: IOPS=890, BW=3561KiB/s (3646kB/s)(10.1MiB/2914msec) 00:14:05.298 slat (nsec): min=7755, max=50272, avg=16001.63, stdev=6066.24 00:14:05.298 clat (usec): min=338, max=45947, avg=1103.16, stdev=5183.18 00:14:05.298 lat (usec): min=346, max=45965, avg=1119.15, stdev=5184.47 00:14:05.298 clat percentiles (usec): 00:14:05.298 | 1.00th=[ 367], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 400], 00:14:05.298 | 30.00th=[ 412], 40.00th=[ 424], 50.00th=[ 433], 60.00th=[ 445], 00:14:05.298 | 70.00th=[ 453], 80.00th=[ 469], 90.00th=[ 506], 95.00th=[ 537], 00:14:05.298 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:05.298 | 99.99th=[45876] 00:14:05.298 bw ( KiB/s): min= 96, max= 9016, per=65.74%, avg=4134.40, stdev=4436.93, samples=5 00:14:05.298 iops : min= 24, max= 2254, avg=1033.60, stdev=1109.23, samples=5 00:14:05.298 lat (usec) : 500=88.09%, 750=10.17%, 1000=0.04% 00:14:05.298 lat (msec) : 4=0.04%, 50=1.62% 00:14:05.298 cpu : usr=1.06%, sys=1.92%, ctx=2596, majf=0, minf=1 00:14:05.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.298 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.298 issued rwts: total=2595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.298 00:14:05.298 Run status group 0 (all jobs): 00:14:05.298 READ: bw=6289KiB/s (6440kB/s), 124KiB/s-3561KiB/s (127kB/s-3646kB/s), io=22.8MiB (23.9MB), run=2914-3719msec 00:14:05.298 00:14:05.298 Disk stats (read/write): 00:14:05.298 nvme0n1: ios=634/0, merge=0/0, ticks=3321/0, in_queue=3321, util=95.74% 00:14:05.298 nvme0n2: ios=2393/0, merge=0/0, ticks=4510/0, in_queue=4510, util=99.33% 00:14:05.298 nvme0n3: ios=96/0, merge=0/0, ticks=3058/0, in_queue=3058, util=96.54% 00:14:05.298 nvme0n4: ios=2639/0, merge=0/0, ticks=3005/0, in_queue=3005, util=99.19% 00:14:05.557 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.557 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:05.815 02:30:53 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.815 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:06.075 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:06.075 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:06.351 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:06.351 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2300295 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:06.610 nvmf hotplug test: fio failed as expected 00:14:06.610 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.868 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:06.868 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:06.868 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:06.868 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:06.868 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:06.868 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.868 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:06.868 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.869 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:06.869 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.869 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-tcp 00:14:06.869 rmmod nvme_tcp 00:14:07.128 rmmod nvme_fabrics 00:14:07.129 rmmod nvme_keyring 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2298390 ']' 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2298390 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 2298390 ']' 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 2298390 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2298390 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2298390' 00:14:07.129 killing process with pid 2298390 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 2298390 00:14:07.129 [2024-05-15 02:30:54.336336] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:07.129 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 2298390 00:14:07.389 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.389 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.389 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.389 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.389 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.389 02:30:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.389 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.389 02:30:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.296 02:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.296 00:14:09.296 real 0m23.888s 00:14:09.296 user 1m21.644s 00:14:09.296 sys 0m6.550s 00:14:09.296 02:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:09.296 02:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.296 ************************************ 00:14:09.296 END TEST nvmf_fio_target 00:14:09.296 ************************************ 00:14:09.296 02:30:56 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:09.296 02:30:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:09.296 02:30:56 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.296 02:30:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.554 ************************************ 00:14:09.554 START TEST nvmf_bdevio 00:14:09.554 ************************************ 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:09.554 * Looking for test storage... 00:14:09.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.554 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.555 02:30:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:12.092 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:12.092 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:12.092 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:12.092 
Found net devices under 0000:0a:00.1: cvl_0_1 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.092 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:14:12.093 00:14:12.093 --- 10.0.0.2 ping statistics --- 00:14:12.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.093 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:14:12.093 00:14:12.093 --- 10.0.0.1 ping statistics --- 00:14:12.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.093 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2303418 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2303418 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 2303418 ']' 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:12.093 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.093 [2024-05-15 02:30:59.333569] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:14:12.093 [2024-05-15 02:30:59.333658] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.093 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.093 [2024-05-15 02:30:59.415339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.352 [2024-05-15 02:30:59.541425] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.352 [2024-05-15 02:30:59.541489] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
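The nvmf_tcp_init steps traced above split the two E810 ports between the default namespace (initiator side) and a private namespace (target side), so one host can exercise NVMe/TCP end to end. A minimal standalone sketch of the same topology, assuming root privileges and reusing the interface names and 10.0.0.0/24 addresses from this run:

TARGET_IF=cvl_0_0              # moves into its own namespace and will host nvmf_tgt
INITIATOR_IF=cvl_0_1           # stays in the default namespace as the NVMe/TCP host
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
ping -c 1 10.0.0.2                              # initiator -> target, as in the log
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator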
00:14:12.352 [2024-05-15 02:30:59.541505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.352 [2024-05-15 02:30:59.541518] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.352 [2024-05-15 02:30:59.541530] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.352 [2024-05-15 02:30:59.541623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:12.352 [2024-05-15 02:30:59.541659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:12.352 [2024-05-15 02:30:59.541711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:12.352 [2024-05-15 02:30:59.541714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.352 [2024-05-15 02:30:59.697768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.352 Malloc0 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:14:12.352 [2024-05-15 02:30:59.748574] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:12.352 [2024-05-15 02:30:59.748878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:12.352 { 00:14:12.352 "params": { 00:14:12.352 "name": "Nvme$subsystem", 00:14:12.352 "trtype": "$TEST_TRANSPORT", 00:14:12.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.352 "adrfam": "ipv4", 00:14:12.352 "trsvcid": "$NVMF_PORT", 00:14:12.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.352 "hdgst": ${hdgst:-false}, 00:14:12.352 "ddgst": ${ddgst:-false} 00:14:12.352 }, 00:14:12.352 "method": "bdev_nvme_attach_controller" 00:14:12.352 } 00:14:12.352 EOF 00:14:12.352 )") 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:12.352 02:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:12.352 "params": { 00:14:12.352 "name": "Nvme1", 00:14:12.352 "trtype": "tcp", 00:14:12.352 "traddr": "10.0.0.2", 00:14:12.353 "adrfam": "ipv4", 00:14:12.353 "trsvcid": "4420", 00:14:12.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.353 "hdgst": false, 00:14:12.353 "ddgst": false 00:14:12.353 }, 00:14:12.353 "method": "bdev_nvme_attach_controller" 00:14:12.353 }' 00:14:12.610 [2024-05-15 02:30:59.793628] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
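The JSON fragment printed by gen_nvmf_target_json above is what bdevio consumes through --json /dev/fd/62: it tells the bdev layer to attach the freshly created NVMe/TCP subsystem as a local controller named Nvme1. A sketch of the equivalent config handed over as a regular file, assuming the generic SPDK --json layout (subsystems/config/method/params) rather than the exact output of the helper:

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json   # same binary as in the log, file instead of /dev/fd/62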
00:14:12.610 [2024-05-15 02:30:59.793703] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2303446 ] 00:14:12.610 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.610 [2024-05-15 02:30:59.865399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:12.610 [2024-05-15 02:30:59.979646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.610 [2024-05-15 02:30:59.979695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.610 [2024-05-15 02:30:59.979698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.179 I/O targets: 00:14:13.179 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:13.179 00:14:13.179 00:14:13.180 CUnit - A unit testing framework for C - Version 2.1-3 00:14:13.180 http://cunit.sourceforge.net/ 00:14:13.180 00:14:13.180 00:14:13.180 Suite: bdevio tests on: Nvme1n1 00:14:13.180 Test: blockdev write read block ...passed 00:14:13.180 Test: blockdev write zeroes read block ...passed 00:14:13.180 Test: blockdev write zeroes read no split ...passed 00:14:13.180 Test: blockdev write zeroes read split ...passed 00:14:13.180 Test: blockdev write zeroes read split partial ...passed 00:14:13.180 Test: blockdev reset ...[2024-05-15 02:31:00.536899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:13.180 [2024-05-15 02:31:00.537017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18919f0 (9): Bad file descriptor 00:14:13.180 [2024-05-15 02:31:00.591531] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:13.180 passed 00:14:13.180 Test: blockdev write read 8 blocks ...passed 00:14:13.440 Test: blockdev write read size > 128k ...passed 00:14:13.440 Test: blockdev write read invalid size ...passed 00:14:13.440 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:13.440 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:13.440 Test: blockdev write read max offset ...passed 00:14:13.440 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:13.440 Test: blockdev writev readv 8 blocks ...passed 00:14:13.440 Test: blockdev writev readv 30 x 1block ...passed 00:14:13.440 Test: blockdev writev readv block ...passed 00:14:13.440 Test: blockdev writev readv size > 128k ...passed 00:14:13.440 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:13.440 Test: blockdev comparev and writev ...[2024-05-15 02:31:00.811865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.440 [2024-05-15 02:31:00.811905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:13.440 [2024-05-15 02:31:00.811951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.440 [2024-05-15 02:31:00.811982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:13.440 [2024-05-15 02:31:00.812430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.440 [2024-05-15 02:31:00.812462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:13.440 [2024-05-15 02:31:00.812496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.440 [2024-05-15 02:31:00.812523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:13.440 [2024-05-15 02:31:00.812959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.440 [2024-05-15 02:31:00.812994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:13.440 [2024-05-15 02:31:00.813029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.440 [2024-05-15 02:31:00.813054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:13.440 [2024-05-15 02:31:00.813491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.440 [2024-05-15 02:31:00.813517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:13.440 [2024-05-15 02:31:00.813552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.440 [2024-05-15 02:31:00.813577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:13.700 passed 00:14:13.700 Test: blockdev nvme passthru rw ...passed 00:14:13.700 Test: blockdev nvme passthru vendor specific ...[2024-05-15 02:31:00.896322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:13.700 [2024-05-15 02:31:00.896351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:13.700 [2024-05-15 02:31:00.896597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:13.700 [2024-05-15 02:31:00.896623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:13.700 [2024-05-15 02:31:00.896864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:13.700 [2024-05-15 02:31:00.896890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:13.700 [2024-05-15 02:31:00.897145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:13.700 [2024-05-15 02:31:00.897170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:13.700 passed 00:14:13.700 Test: blockdev nvme admin passthru ...passed 00:14:13.700 Test: blockdev copy ...passed 00:14:13.700 00:14:13.700 Run Summary: Type Total Ran Passed Failed Inactive 00:14:13.700 suites 1 1 n/a 0 0 00:14:13.700 tests 23 23 23 0 0 00:14:13.700 asserts 152 152 152 0 n/a 00:14:13.700 00:14:13.700 Elapsed time = 1.250 seconds 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:13.959 rmmod nvme_tcp 00:14:13.959 rmmod nvme_fabrics 00:14:13.959 rmmod nvme_keyring 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2303418 ']' 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2303418 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
2303418 ']' 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 2303418 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2303418 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2303418' 00:14:13.959 killing process with pid 2303418 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 2303418 00:14:13.959 [2024-05-15 02:31:01.252401] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:13.959 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 2303418 00:14:14.217 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:14.217 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:14.217 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:14.217 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.217 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:14.217 02:31:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.217 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.217 02:31:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:16.754 00:14:16.754 real 0m6.861s 00:14:16.754 user 0m11.328s 00:14:16.754 sys 0m2.346s 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:16.754 ************************************ 00:14:16.754 END TEST nvmf_bdevio 00:14:16.754 ************************************ 00:14:16.754 02:31:03 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:14:16.754 02:31:03 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:16.754 02:31:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:16.754 02:31:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.754 02:31:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:16.754 ************************************ 00:14:16.754 START TEST nvmf_bdevio_no_huge 00:14:16.754 ************************************ 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:16.754 * Looking for test storage... 
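nvmf_bdevio_no_huge repeats the same bdevio flow; the only intended difference is DPDK memory mode. The --no-hugepages argument makes the helpers start both the target and bdevio with --no-huge -s 1024, i.e. plain 4 KiB pages capped at 1024 MiB instead of pre-reserved hugepages. The two invocations that matter, as they appear later in this run (paths shortened to the SPDK tree):

# target side, launched inside the namespace by nvmfappstart
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

# initiator side, same bdevio binary as before plus the no-huge flags
test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024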
00:14:16.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.754 02:31:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.754 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:16.755 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:16.755 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.755 02:31:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.287 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:19.288 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:19.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:19.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.288 02:31:06 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:19.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:19.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:14:19.288 00:14:19.288 --- 10.0.0.2 ping statistics --- 00:14:19.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.288 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:14:19.288 00:14:19.288 --- 10.0.0.1 ping statistics --- 00:14:19.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.288 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2305930 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2305930 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 2305930 ']' 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
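nvmfappstart and waitforlisten below reduce to: launch nvmf_tgt inside the namespace, remember its pid, and poll the RPC socket until it answers. A rough standalone equivalent, assuming scripts/rpc.py from the SPDK tree (a sketch, not the actual waitforlisten implementation):

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# The RPC socket is a UNIX domain socket, so it is reachable regardless of net namespace.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "target exited before listening" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up and serving RPCs on /var/tmp/spdk.sock"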
00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:19.288 02:31:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.288 [2024-05-15 02:31:06.315194] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:14:19.288 [2024-05-15 02:31:06.315282] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:19.288 [2024-05-15 02:31:06.401541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.288 [2024-05-15 02:31:06.524586] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.288 [2024-05-15 02:31:06.524647] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.288 [2024-05-15 02:31:06.524663] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.288 [2024-05-15 02:31:06.524675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.288 [2024-05-15 02:31:06.524686] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.288 [2024-05-15 02:31:06.524778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:19.288 [2024-05-15 02:31:06.524811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:19.288 [2024-05-15 02:31:06.524863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:19.288 [2024-05-15 02:31:06.524866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.223 [2024-05-15 02:31:07.358131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.223 Malloc0 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.223 [2024-05-15 02:31:07.396015] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:20.223 [2024-05-15 02:31:07.396288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:20.223 { 00:14:20.223 "params": { 00:14:20.223 "name": "Nvme$subsystem", 00:14:20.223 "trtype": "$TEST_TRANSPORT", 00:14:20.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:20.223 "adrfam": "ipv4", 00:14:20.223 "trsvcid": "$NVMF_PORT", 00:14:20.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:20.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:20.223 "hdgst": ${hdgst:-false}, 00:14:20.223 "ddgst": ${ddgst:-false} 00:14:20.223 }, 00:14:20.223 "method": "bdev_nvme_attach_controller" 00:14:20.223 } 00:14:20.223 EOF 00:14:20.223 )") 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:20.223 02:31:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:20.223 "params": { 00:14:20.223 "name": "Nvme1", 00:14:20.223 "trtype": "tcp", 00:14:20.223 "traddr": "10.0.0.2", 00:14:20.223 "adrfam": "ipv4", 00:14:20.223 "trsvcid": "4420", 00:14:20.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.223 "hdgst": false, 00:14:20.223 "ddgst": false 00:14:20.223 }, 00:14:20.223 "method": "bdev_nvme_attach_controller" 00:14:20.223 }' 00:14:20.223 [2024-05-15 02:31:07.440027] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:14:20.223 [2024-05-15 02:31:07.440107] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2306086 ] 00:14:20.223 [2024-05-15 02:31:07.514090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:20.223 [2024-05-15 02:31:07.626399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.223 [2024-05-15 02:31:07.626447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.223 [2024-05-15 02:31:07.626450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.790 I/O targets: 00:14:20.790 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:20.790 00:14:20.790 00:14:20.790 CUnit - A unit testing framework for C - Version 2.1-3 00:14:20.790 http://cunit.sourceforge.net/ 00:14:20.790 00:14:20.790 00:14:20.790 Suite: bdevio tests on: Nvme1n1 00:14:20.790 Test: blockdev write read block ...passed 00:14:20.790 Test: blockdev write zeroes read block ...passed 00:14:20.790 Test: blockdev write zeroes read no split ...passed 00:14:20.790 Test: blockdev write zeroes read split ...passed 00:14:20.790 Test: blockdev write zeroes read split partial ...passed 00:14:20.790 Test: blockdev reset ...[2024-05-15 02:31:08.161515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:20.790 [2024-05-15 02:31:08.161622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147b340 (9): Bad file descriptor 00:14:21.051 [2024-05-15 02:31:08.271403] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:21.051 passed 00:14:21.051 Test: blockdev write read 8 blocks ...passed 00:14:21.051 Test: blockdev write read size > 128k ...passed 00:14:21.051 Test: blockdev write read invalid size ...passed 00:14:21.051 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:21.051 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:21.051 Test: blockdev write read max offset ...passed 00:14:21.051 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:21.051 Test: blockdev writev readv 8 blocks ...passed 00:14:21.311 Test: blockdev writev readv 30 x 1block ...passed 00:14:21.311 Test: blockdev writev readv block ...passed 00:14:21.311 Test: blockdev writev readv size > 128k ...passed 00:14:21.311 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:21.311 Test: blockdev comparev and writev ...[2024-05-15 02:31:08.531338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.311 [2024-05-15 02:31:08.531378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:21.311 [2024-05-15 02:31:08.531418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.311 [2024-05-15 02:31:08.531447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:21.311 [2024-05-15 02:31:08.531896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.311 [2024-05-15 02:31:08.531937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:21.312 [2024-05-15 02:31:08.531976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.312 [2024-05-15 02:31:08.532003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:21.312 [2024-05-15 02:31:08.532439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.312 [2024-05-15 02:31:08.532466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:21.312 [2024-05-15 02:31:08.532501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.312 [2024-05-15 02:31:08.532527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:21.312 [2024-05-15 02:31:08.532958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.312 [2024-05-15 02:31:08.532985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:21.312 [2024-05-15 02:31:08.533021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.312 [2024-05-15 02:31:08.533047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:21.312 passed 00:14:21.312 Test: blockdev nvme passthru rw ...passed 00:14:21.312 Test: blockdev nvme passthru vendor specific ...[2024-05-15 02:31:08.617344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.312 [2024-05-15 02:31:08.617373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:21.312 [2024-05-15 02:31:08.617616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.312 [2024-05-15 02:31:08.617643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:21.312 [2024-05-15 02:31:08.617885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.312 [2024-05-15 02:31:08.617911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:21.312 [2024-05-15 02:31:08.618155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.312 [2024-05-15 02:31:08.618182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:21.312 passed 00:14:21.312 Test: blockdev nvme admin passthru ...passed 00:14:21.312 Test: blockdev copy ...passed 00:14:21.312 00:14:21.312 Run Summary: Type Total Ran Passed Failed Inactive 00:14:21.312 suites 1 1 n/a 0 0 00:14:21.312 tests 23 23 23 0 0 00:14:21.312 asserts 152 152 152 0 n/a 00:14:21.312 00:14:21.312 Elapsed time = 1.430 seconds 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.881 rmmod nvme_tcp 00:14:21.881 rmmod nvme_fabrics 00:14:21.881 rmmod nvme_keyring 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2305930 ']' 00:14:21.881 02:31:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2305930 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 2305930 ']' 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 2305930 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2305930 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2305930' 00:14:21.881 killing process with pid 2305930 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 2305930 00:14:21.881 [2024-05-15 02:31:09.131192] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:21.881 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 2305930 00:14:22.452 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.452 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.452 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.452 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.452 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.452 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.452 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.452 02:31:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.359 02:31:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:24.359 00:14:24.359 real 0m7.979s 00:14:24.360 user 0m15.507s 00:14:24.360 sys 0m2.936s 00:14:24.360 02:31:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:24.360 02:31:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:24.360 ************************************ 00:14:24.360 END TEST nvmf_bdevio_no_huge 00:14:24.360 ************************************ 00:14:24.360 02:31:11 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:24.360 02:31:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:24.360 02:31:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:24.360 02:31:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.360 ************************************ 00:14:24.360 START TEST nvmf_tls 00:14:24.360 ************************************ 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
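
The bdevio run above prints the JSON it passes to bdev_nvme_attach_controller when attaching Nvme1 over TCP (which then shows up as bdev Nvme1n1 under test). The same attach can be issued by hand against a bdev application's RPC socket; a minimal sketch, with the socket path and controller name treated as illustrative and the flags mirroring the rpc.py invocations that appear later in this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Attach nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 as controller "Nvme1";
# hdgst/ddgst are left at their default of false, matching the params block above.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
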
00:14:24.360 * Looking for test storage... 00:14:24.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
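
Earlier in this trace nvmf/common.sh builds the host identity with nvme-cli: nvme gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the UUID portion is reused as the host ID for the NVME_HOST argument array. A small sketch of that pairing; the exact parsing inside common.sh is assumed, only the resulting values are visible in the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "host NQN: $NVME_HOSTNQN"
echo "host ID:  $NVME_HOSTID"
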
00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.360 02:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:26.980 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.980 
02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:26.980 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:26.980 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:26.980 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.980 
02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.980 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:14:26.981 00:14:26.981 --- 10.0.0.2 ping statistics --- 00:14:26.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.981 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:14:26.981 00:14:26.981 --- 10.0.0.1 ping statistics --- 00:14:26.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.981 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2308573 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2308573 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2308573 ']' 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:26.981 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.981 [2024-05-15 02:31:14.249639] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:14:26.981 [2024-05-15 02:31:14.249717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.981 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.981 [2024-05-15 02:31:14.336152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.239 [2024-05-15 02:31:14.457579] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.239 [2024-05-15 02:31:14.457644] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
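
The nvmf_tcp_init sequence traced above splits the two detected e810 ports between target and initiator: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and both directions are verified with ping before nvmf_tgt is started inside the namespace. Condensed from the commands in this trace (interface names are the ones detected on this node):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
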
00:14:27.239 [2024-05-15 02:31:14.457659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.239 [2024-05-15 02:31:14.457672] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.239 [2024-05-15 02:31:14.457684] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.239 [2024-05-15 02:31:14.457713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.239 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:27.239 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:27.239 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.239 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.239 02:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.239 02:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.239 02:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:27.239 02:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:27.496 true 00:14:27.496 02:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:27.496 02:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:27.755 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:27.755 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:27.755 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:28.014 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:28.014 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:28.272 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:28.272 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:28.272 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:28.531 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:28.531 02:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:28.791 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:28.791 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:28.791 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:28.791 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:29.049 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:29.049 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:29.049 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:29.306 02:31:16 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:29.306 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:14:29.564 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:29.564 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:29.564 02:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:29.821 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:29.821 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.WV7Cspjzv4 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.RrxLDoggTs 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.WV7Cspjzv4 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RrxLDoggTs 00:14:30.081 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:14:30.340 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:14:30.599 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.WV7Cspjzv4 00:14:30.599 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WV7Cspjzv4 00:14:30.599 02:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:30.857 [2024-05-15 02:31:18.234705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.857 02:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:31.116 02:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:31.376 [2024-05-15 02:31:18.760077] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:31.376 [2024-05-15 02:31:18.760208] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:31.376 [2024-05-15 02:31:18.760444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.376 02:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:31.636 malloc0 00:14:31.636 02:31:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:31.896 02:31:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WV7Cspjzv4 00:14:32.154 [2024-05-15 02:31:19.510665] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:32.154 02:31:19 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WV7Cspjzv4 00:14:32.154 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.363 Initializing NVMe Controllers 00:14:44.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:44.363 Initialization complete. Launching workers. 
00:14:44.363 ======================================================== 00:14:44.363 Latency(us) 00:14:44.363 Device Information : IOPS MiB/s Average min max 00:14:44.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7602.37 29.70 8421.24 1216.21 9232.28 00:14:44.363 ======================================================== 00:14:44.363 Total : 7602.37 29.70 8421.24 1216.21 9232.28 00:14:44.363 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WV7Cspjzv4 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WV7Cspjzv4' 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2310461 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2310461 /var/tmp/bdevperf.sock 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2310461 ']' 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.363 [2024-05-15 02:31:29.677500] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:14:44.363 [2024-05-15 02:31:29.677582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310461 ] 00:14:44.363 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.363 [2024-05-15 02:31:29.745650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.363 [2024-05-15 02:31:29.851826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:44.363 02:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WV7Cspjzv4 00:14:44.363 [2024-05-15 02:31:30.193095] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:44.363 [2024-05-15 02:31:30.193242] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:44.363 TLSTESTn1 00:14:44.363 02:31:30 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:44.363 Running I/O for 10 seconds... 00:14:54.350 00:14:54.350 Latency(us) 00:14:54.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.350 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:54.350 Verification LBA range: start 0x0 length 0x2000 00:14:54.350 TLSTESTn1 : 10.09 1202.69 4.70 0.00 0.00 106044.79 10243.03 136703.24 00:14:54.350 =================================================================================================================== 00:14:54.350 Total : 1202.69 4.70 0.00 0.00 106044.79 10243.03 136703.24 00:14:54.350 0 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2310461 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2310461 ']' 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2310461 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2310461 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2310461' 00:14:54.350 killing process with pid 2310461 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2310461 00:14:54.350 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.350 00:14:54.350 Latency(us) 00:14:54.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:14:54.350 =================================================================================================================== 00:14:54.350 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.350 [2024-05-15 02:31:40.552547] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2310461 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RrxLDoggTs 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RrxLDoggTs 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RrxLDoggTs 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:54.350 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RrxLDoggTs' 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2311718 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2311718 /var/tmp/bdevperf.sock 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2311718 ']' 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.351 02:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.351 [2024-05-15 02:31:40.862766] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:14:54.351 [2024-05-15 02:31:40.862854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311718 ] 00:14:54.351 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.351 [2024-05-15 02:31:40.930224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.351 [2024-05-15 02:31:41.034795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RrxLDoggTs 00:14:54.351 [2024-05-15 02:31:41.419786] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.351 [2024-05-15 02:31:41.419944] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:54.351 [2024-05-15 02:31:41.426746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:54.351 [2024-05-15 02:31:41.427229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2063130 (107): Transport endpoint is not connected 00:14:54.351 [2024-05-15 02:31:41.428209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2063130 (9): Bad file descriptor 00:14:54.351 [2024-05-15 02:31:41.429209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:54.351 [2024-05-15 02:31:41.429249] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:54.351 [2024-05-15 02:31:41.429277] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:54.351 request: 00:14:54.351 { 00:14:54.351 "name": "TLSTEST", 00:14:54.351 "trtype": "tcp", 00:14:54.351 "traddr": "10.0.0.2", 00:14:54.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.351 "adrfam": "ipv4", 00:14:54.351 "trsvcid": "4420", 00:14:54.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.351 "psk": "/tmp/tmp.RrxLDoggTs", 00:14:54.351 "method": "bdev_nvme_attach_controller", 00:14:54.351 "req_id": 1 00:14:54.351 } 00:14:54.351 Got JSON-RPC error response 00:14:54.351 response: 00:14:54.351 { 00:14:54.351 "code": -32602, 00:14:54.351 "message": "Invalid parameters" 00:14:54.351 } 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2311718 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2311718 ']' 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2311718 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2311718 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2311718' 00:14:54.351 killing process with pid 2311718 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2311718 00:14:54.351 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.351 00:14:54.351 Latency(us) 00:14:54.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.351 =================================================================================================================== 00:14:54.351 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:54.351 [2024-05-15 02:31:41.479104] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2311718 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WV7Cspjzv4 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WV7Cspjzv4 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WV7Cspjzv4 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WV7Cspjzv4' 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2311807 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2311807 /var/tmp/bdevperf.sock 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2311807 ']' 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.351 02:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.611 [2024-05-15 02:31:41.779726] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:14:54.611 [2024-05-15 02:31:41.779805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311807 ] 00:14:54.611 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.611 [2024-05-15 02:31:41.846390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.611 [2024-05-15 02:31:41.954759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.870 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.870 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:54.870 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.WV7Cspjzv4 00:14:55.132 [2024-05-15 02:31:42.337331] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:55.132 [2024-05-15 02:31:42.337471] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:55.132 [2024-05-15 02:31:42.348997] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:55.132 [2024-05-15 02:31:42.349027] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:55.132 [2024-05-15 02:31:42.349083] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:55.132 [2024-05-15 02:31:42.349433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc95130 (107): Transport endpoint is not connected 00:14:55.132 [2024-05-15 02:31:42.350417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc95130 (9): Bad file descriptor 00:14:55.132 [2024-05-15 02:31:42.351418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:55.132 [2024-05-15 02:31:42.351442] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:55.132 [2024-05-15 02:31:42.351471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:55.132 request: 00:14:55.132 { 00:14:55.132 "name": "TLSTEST", 00:14:55.132 "trtype": "tcp", 00:14:55.132 "traddr": "10.0.0.2", 00:14:55.132 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:55.132 "adrfam": "ipv4", 00:14:55.132 "trsvcid": "4420", 00:14:55.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.132 "psk": "/tmp/tmp.WV7Cspjzv4", 00:14:55.132 "method": "bdev_nvme_attach_controller", 00:14:55.132 "req_id": 1 00:14:55.132 } 00:14:55.132 Got JSON-RPC error response 00:14:55.132 response: 00:14:55.132 { 00:14:55.132 "code": -32602, 00:14:55.132 "message": "Invalid parameters" 00:14:55.132 } 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2311807 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2311807 ']' 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2311807 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2311807 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2311807' 00:14:55.132 killing process with pid 2311807 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2311807 00:14:55.132 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.132 00:14:55.132 Latency(us) 00:14:55.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.132 =================================================================================================================== 00:14:55.132 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.132 [2024-05-15 02:31:42.401441] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:55.132 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2311807 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WV7Cspjzv4 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WV7Cspjzv4 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WV7Cspjzv4 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WV7Cspjzv4' 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2311942 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2311942 /var/tmp/bdevperf.sock 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2311942 ']' 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:55.441 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.441 [2024-05-15 02:31:42.706062] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:14:55.441 [2024-05-15 02:31:42.706153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311942 ] 00:14:55.441 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.441 [2024-05-15 02:31:42.780721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.700 [2024-05-15 02:31:42.892654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.700 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:55.700 02:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:55.700 02:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WV7Cspjzv4 00:14:55.961 [2024-05-15 02:31:43.227414] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:55.961 [2024-05-15 02:31:43.227539] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:55.961 [2024-05-15 02:31:43.232861] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:55.961 [2024-05-15 02:31:43.232892] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:55.961 [2024-05-15 02:31:43.232965] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:55.961 [2024-05-15 02:31:43.233461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2227130 (107): Transport endpoint is not connected 00:14:55.961 [2024-05-15 02:31:43.234443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2227130 (9): Bad file descriptor 00:14:55.961 [2024-05-15 02:31:43.235441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:55.961 [2024-05-15 02:31:43.235464] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:55.961 [2024-05-15 02:31:43.235491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:55.961 request: 00:14:55.961 { 00:14:55.961 "name": "TLSTEST", 00:14:55.961 "trtype": "tcp", 00:14:55.961 "traddr": "10.0.0.2", 00:14:55.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.961 "adrfam": "ipv4", 00:14:55.961 "trsvcid": "4420", 00:14:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:55.961 "psk": "/tmp/tmp.WV7Cspjzv4", 00:14:55.961 "method": "bdev_nvme_attach_controller", 00:14:55.961 "req_id": 1 00:14:55.961 } 00:14:55.961 Got JSON-RPC error response 00:14:55.961 response: 00:14:55.961 { 00:14:55.961 "code": -32602, 00:14:55.961 "message": "Invalid parameters" 00:14:55.961 } 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2311942 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2311942 ']' 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2311942 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2311942 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2311942' 00:14:55.961 killing process with pid 2311942 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2311942 00:14:55.961 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.961 00:14:55.961 Latency(us) 00:14:55.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.961 =================================================================================================================== 00:14:55.961 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.961 [2024-05-15 02:31:43.288669] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:55.961 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2311942 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
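Each sub-test ends with killprocess <pid>, whose xtrace is interleaved above: it validates the argument, checks the process is still alive, confirms via ps that the command name is an SPDK reactor rather than sudo, and only then sends the signal. A rough reconstruction from the visible trace (the argument handling and the final kill/wait split are partly assumed):

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1                        # trace: '[' -z 2311942 ']'
        kill -0 "$pid" 2> /dev/null || return 0          # trace: kill -0 <pid>; nothing to do if gone
        [ "$(uname)" = Linux ] &&
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_2 in the runs above
        [ "$process_name" = sudo ] && return 1           # never signal a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"                                      # the caller then reaps it with wait $pid
    }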
00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2312082 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2312082 /var/tmp/bdevperf.sock 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2312082 ']' 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:56.222 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.222 [2024-05-15 02:31:43.592388] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:14:56.222 [2024-05-15 02:31:43.592472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312082 ] 00:14:56.222 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.480 [2024-05-15 02:31:43.660148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.480 [2024-05-15 02:31:43.761715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.480 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:56.480 02:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:56.480 02:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:56.738 [2024-05-15 02:31:44.093536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:56.738 [2024-05-15 02:31:44.095619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73dab0 (9): Bad file descriptor 00:14:56.738 [2024-05-15 02:31:44.096615] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:56.738 [2024-05-15 02:31:44.096639] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:56.738 [2024-05-15 02:31:44.096673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
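This third negative case drops the key entirely: bdevperf connects as an otherwise valid host but without --psk, and the TCP connection is torn down at the socket level (errno 107, then a bad file descriptor) before any controller initialization, presumably because the target's listener only accepts TLS-wrapped connections. The JSON-RPC error that follows is the same -32602 seen in the previous two cases. The attach as issued above, minus the PSK argument:

    # Expected to fail: no PSK supplied, so the TLS handshake with the listener cannot complete.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1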
00:14:56.738 request: 00:14:56.738 { 00:14:56.738 "name": "TLSTEST", 00:14:56.738 "trtype": "tcp", 00:14:56.738 "traddr": "10.0.0.2", 00:14:56.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.738 "adrfam": "ipv4", 00:14:56.738 "trsvcid": "4420", 00:14:56.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.738 "method": "bdev_nvme_attach_controller", 00:14:56.738 "req_id": 1 00:14:56.738 } 00:14:56.738 Got JSON-RPC error response 00:14:56.738 response: 00:14:56.738 { 00:14:56.738 "code": -32602, 00:14:56.738 "message": "Invalid parameters" 00:14:56.738 } 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2312082 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2312082 ']' 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2312082 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2312082 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2312082' 00:14:56.738 killing process with pid 2312082 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2312082 00:14:56.738 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.738 00:14:56.738 Latency(us) 00:14:56.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.738 =================================================================================================================== 00:14:56.738 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:56.738 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2312082 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2308573 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2308573 ']' 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2308573 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:56.996 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2308573 00:14:56.997 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:56.997 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:56.997 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2308573' 00:14:56.997 killing process with pid 2308573 00:14:56.997 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2308573 
00:14:56.997 [2024-05-15 02:31:44.409190] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:56.997 [2024-05-15 02:31:44.409254] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:56.997 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2308573 00:14:57.564 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:57.564 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:57.564 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:57.564 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:57.564 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:57.564 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:57.564 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.L7UzFZjU1y 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.L7UzFZjU1y 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2312232 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2312232 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2312232 ']' 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:57.565 02:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.565 [2024-05-15 02:31:44.776559] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
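Above, the script switches from the short temporary key used so far to a full interchange-format secret: format_interchange_psk calls format_key with the prefix NVMeTLSkey-1, the configured key, and hash identifier 2, pipes them through an inline python snippet, writes the resulting NVMeTLSkey-1:02:...: string to a mktemp file, and locks the file down to 0600. The python body itself is not echoed in the trace; the sketch below is one plausible reconstruction that matches the printed layout (base64 of the key bytes with a CRC32 appended; the exact construction and CRC byte order are assumptions):

    # Reconstruction of the key formatting traced above (helper internals assumed, not copied).
    key=00112233445566778899aabbccddeeff0011223344556677
    key_long=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k+crc).decode(), end="")' "$key")
    key_long_path=$(mktemp)                              # /tmp/tmp.L7UzFZjU1y in this run
    echo -n "$key_long" > "$key_long_path"
    chmod 0600 "$key_long_path"                          # looser modes are rejected later in the run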
00:14:57.565 [2024-05-15 02:31:44.776655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.565 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.565 [2024-05-15 02:31:44.857209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.565 [2024-05-15 02:31:44.970584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.565 [2024-05-15 02:31:44.970655] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.565 [2024-05-15 02:31:44.970671] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.565 [2024-05-15 02:31:44.970684] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.565 [2024-05-15 02:31:44.970695] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.565 [2024-05-15 02:31:44.970734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.503 02:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:58.503 02:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:58.503 02:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.503 02:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.503 02:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.503 02:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.503 02:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.L7UzFZjU1y 00:14:58.503 02:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.L7UzFZjU1y 00:14:58.503 02:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:58.761 [2024-05-15 02:31:45.996296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.761 02:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:59.019 02:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:59.277 [2024-05-15 02:31:46.501594] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:59.277 [2024-05-15 02:31:46.501695] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:59.277 [2024-05-15 02:31:46.501943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.277 02:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:59.536 malloc0 00:14:59.536 02:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
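setup_nvmf_tgt (target/tls.sh@49-56) has now rebuilt the target around the new key file: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 backed by a 32 MiB malloc bdev, and a listener on 10.0.0.2:4420 created with -k so that only TLS connections are accepted. The same sequence, condensed from the rpc.py calls in the trace (rpc here is shorthand for the workspace copy of scripts/rpc.py; the host and its PSK are registered in the next record):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1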
00:14:59.794 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y 00:15:00.052 [2024-05-15 02:31:47.347820] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:00.052 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L7UzFZjU1y 00:15:00.052 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.L7UzFZjU1y' 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2312527 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2312527 /var/tmp/bdevperf.sock 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2312527 ']' 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:00.053 02:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.053 [2024-05-15 02:31:47.413201] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
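With the key file in place and the host registered (first record above), the positive case can run: the records that follow show the same attach command succeeding for host1, the resulting TLSTESTn1 bdev, and a 10-second verify workload driven through bdevperf's RPC interface. Condensed from the trace (the -t 20 on bdevperf.py is the helper's wait timeout; the 10-second run length comes from the -t 10 given when bdevperf was launched):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.L7UzFZjU1y
    # Positive case: host1 and its registered key match, so the attach creates bdev TLSTESTn1.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y
    # Drive the configured verify workload through the new bdev.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests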
00:15:00.053 [2024-05-15 02:31:47.413285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312527 ] 00:15:00.053 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.311 [2024-05-15 02:31:47.482479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.311 [2024-05-15 02:31:47.594438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.311 02:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:00.311 02:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:00.311 02:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y 00:15:00.571 [2024-05-15 02:31:47.979279] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.571 [2024-05-15 02:31:47.979434] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:00.831 TLSTESTn1 00:15:00.831 02:31:48 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:00.831 Running I/O for 10 seconds... 00:15:13.053 00:15:13.053 Latency(us) 00:15:13.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.053 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:13.053 Verification LBA range: start 0x0 length 0x2000 00:15:13.053 TLSTESTn1 : 10.06 998.55 3.90 0.00 0.00 127901.10 5922.51 146800.64 00:15:13.053 =================================================================================================================== 00:15:13.053 Total : 998.55 3.90 0.00 0.00 127901.10 5922.51 146800.64 00:15:13.053 0 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2312527 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2312527 ']' 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2312527 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2312527 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2312527' 00:15:13.053 killing process with pid 2312527 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2312527 00:15:13.053 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.053 00:15:13.053 Latency(us) 00:15:13.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:15:13.053 =================================================================================================================== 00:15:13.053 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.053 [2024-05-15 02:31:58.319036] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2312527 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.L7UzFZjU1y 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L7UzFZjU1y 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L7UzFZjU1y 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L7UzFZjU1y 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.L7UzFZjU1y' 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2313842 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2313842 /var/tmp/bdevperf.sock 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2313842 ']' 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.053 [2024-05-15 02:31:58.638670] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:15:13.053 [2024-05-15 02:31:58.638754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313842 ] 00:15:13.053 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.053 [2024-05-15 02:31:58.708496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.053 [2024-05-15 02:31:58.815763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.053 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:13.054 02:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:13.054 02:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y 00:15:13.054 [2024-05-15 02:31:59.150016] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.054 [2024-05-15 02:31:59.150111] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:13.054 [2024-05-15 02:31:59.150134] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.L7UzFZjU1y 00:15:13.054 request: 00:15:13.054 { 00:15:13.054 "name": "TLSTEST", 00:15:13.054 "trtype": "tcp", 00:15:13.054 "traddr": "10.0.0.2", 00:15:13.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.054 "adrfam": "ipv4", 00:15:13.054 "trsvcid": "4420", 00:15:13.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.054 "psk": "/tmp/tmp.L7UzFZjU1y", 00:15:13.054 "method": "bdev_nvme_attach_controller", 00:15:13.054 "req_id": 1 00:15:13.054 } 00:15:13.054 Got JSON-RPC error response 00:15:13.054 response: 00:15:13.054 { 00:15:13.054 "code": -1, 00:15:13.054 "message": "Operation not permitted" 00:15:13.054 } 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2313842 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2313842 ']' 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2313842 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2313842 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2313842' 00:15:13.054 killing process with pid 2313842 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2313842 00:15:13.054 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.054 00:15:13.054 Latency(us) 00:15:13.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.054 =================================================================================================================== 00:15:13.054 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 2313842 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2312232 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2312232 ']' 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2312232 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2312232 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2312232' 00:15:13.054 killing process with pid 2312232 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2312232 00:15:13.054 [2024-05-15 02:31:59.489588] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:13.054 [2024-05-15 02:31:59.489657] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2312232 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2313991 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2313991 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2313991 ']' 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
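The 0666 experiment a few records back is the client-side counterpart of the chmod 0600 applied when the key was created: bdev_nvme_attach_controller refuses to load a PSK file whose permissions are too loose ("Incorrect permissions for PSK file"), and the RPC surfaces this as -1 / "Operation not permitted" rather than the -32602 seen for missing keys. A condensed repro of that check, using the key path from this run; the mode is restored to 0600 a few records later (target/tls.sh@181) before the key is reused against the fresh target:

    chmod 0666 /tmp/tmp.L7UzFZjU1y      # deliberately too permissive
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y
    # -> "Operation not permitted"; restore the expected mode before using the key again.
    chmod 0600 /tmp/tmp.L7UzFZjU1y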
00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:13.054 02:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.054 [2024-05-15 02:31:59.847818] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:15:13.054 [2024-05-15 02:31:59.847913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.054 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.054 [2024-05-15 02:31:59.930986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.054 [2024-05-15 02:32:00.051990] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.054 [2024-05-15 02:32:00.052055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.054 [2024-05-15 02:32:00.052069] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.054 [2024-05-15 02:32:00.052081] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.054 [2024-05-15 02:32:00.052091] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.054 [2024-05-15 02:32:00.052123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.L7UzFZjU1y 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.L7UzFZjU1y 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.L7UzFZjU1y 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.L7UzFZjU1y 00:15:13.622 02:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:13.622 [2024-05-15 02:32:01.025534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.882 02:32:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:14.140 02:32:01 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:14.398 [2024-05-15 02:32:01.578988] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:14.398 [2024-05-15 02:32:01.579106] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:14.398 [2024-05-15 02:32:01.579324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.398 02:32:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:14.656 malloc0 00:15:14.656 02:32:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:14.914 02:32:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y 00:15:15.174 [2024-05-15 02:32:02.427866] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:15.174 [2024-05-15 02:32:02.427912] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:15.174 [2024-05-15 02:32:02.427957] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:15.174 request: 00:15:15.174 { 00:15:15.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.174 "host": "nqn.2016-06.io.spdk:host1", 00:15:15.174 "psk": "/tmp/tmp.L7UzFZjU1y", 00:15:15.174 "method": "nvmf_subsystem_add_host", 00:15:15.174 "req_id": 1 00:15:15.174 } 00:15:15.174 Got JSON-RPC error response 00:15:15.174 response: 00:15:15.174 { 00:15:15.174 "code": -32603, 00:15:15.174 "message": "Internal error" 00:15:15.174 } 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2313991 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2313991 ']' 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2313991 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2313991 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2313991' 00:15:15.174 killing process with pid 2313991 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2313991 00:15:15.174 [2024-05-15 02:32:02.481272] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:15.174 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2313991 00:15:15.433 02:32:02 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.L7UzFZjU1y 00:15:15.433 02:32:02 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:15.433 02:32:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.433 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2314416 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2314416 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2314416 ']' 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.434 02:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.434 [2024-05-15 02:32:02.842252] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:15:15.434 [2024-05-15 02:32:02.842347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.693 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.693 [2024-05-15 02:32:02.924119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.693 [2024-05-15 02:32:03.036077] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.693 [2024-05-15 02:32:03.036159] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.693 [2024-05-15 02:32:03.036176] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.693 [2024-05-15 02:32:03.036188] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.693 [2024-05-15 02:32:03.036200] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:15.693 [2024-05-15 02:32:03.036244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.625 02:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.625 02:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:16.625 02:32:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.625 02:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:16.625 02:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.625 02:32:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.625 02:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.L7UzFZjU1y 00:15:16.625 02:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.L7UzFZjU1y 00:15:16.625 02:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:16.625 [2024-05-15 02:32:04.013362] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.625 02:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:16.882 02:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:17.140 [2024-05-15 02:32:04.494586] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:17.140 [2024-05-15 02:32:04.494673] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:17.140 [2024-05-15 02:32:04.494897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.140 02:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:17.397 malloc0 00:15:17.397 02:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:17.653 02:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y 00:15:17.911 [2024-05-15 02:32:05.244635] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2314707 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2314707 /var/tmp/bdevperf.sock 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2314707 ']' 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:17.911 02:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.911 [2024-05-15 02:32:05.305498] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:15:17.911 [2024-05-15 02:32:05.305573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314707 ] 00:15:18.189 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.189 [2024-05-15 02:32:05.373569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.189 [2024-05-15 02:32:05.477906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.460 02:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:18.460 02:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:18.460 02:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y 00:15:18.460 [2024-05-15 02:32:05.832013] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:18.460 [2024-05-15 02:32:05.832139] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:18.717 TLSTESTn1 00:15:18.717 02:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:15:18.974 02:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:18.974 "subsystems": [ 00:15:18.974 { 00:15:18.974 "subsystem": "keyring", 00:15:18.974 "config": [] 00:15:18.974 }, 00:15:18.974 { 00:15:18.974 "subsystem": "iobuf", 00:15:18.974 "config": [ 00:15:18.974 { 00:15:18.974 "method": "iobuf_set_options", 00:15:18.974 "params": { 00:15:18.974 "small_pool_count": 8192, 00:15:18.974 "large_pool_count": 1024, 00:15:18.974 "small_bufsize": 8192, 00:15:18.974 "large_bufsize": 135168 00:15:18.974 } 00:15:18.974 } 00:15:18.974 ] 00:15:18.974 }, 00:15:18.974 { 00:15:18.974 "subsystem": "sock", 00:15:18.974 "config": [ 00:15:18.974 { 00:15:18.974 "method": "sock_impl_set_options", 00:15:18.974 "params": { 00:15:18.974 "impl_name": "posix", 00:15:18.974 "recv_buf_size": 2097152, 00:15:18.974 "send_buf_size": 2097152, 00:15:18.974 "enable_recv_pipe": true, 00:15:18.974 "enable_quickack": false, 00:15:18.974 "enable_placement_id": 0, 00:15:18.974 "enable_zerocopy_send_server": true, 00:15:18.974 "enable_zerocopy_send_client": false, 00:15:18.974 "zerocopy_threshold": 0, 00:15:18.974 "tls_version": 0, 00:15:18.974 "enable_ktls": false 00:15:18.974 } 00:15:18.974 }, 00:15:18.974 { 00:15:18.974 "method": "sock_impl_set_options", 00:15:18.974 "params": { 00:15:18.974 
"impl_name": "ssl", 00:15:18.974 "recv_buf_size": 4096, 00:15:18.974 "send_buf_size": 4096, 00:15:18.974 "enable_recv_pipe": true, 00:15:18.975 "enable_quickack": false, 00:15:18.975 "enable_placement_id": 0, 00:15:18.975 "enable_zerocopy_send_server": true, 00:15:18.975 "enable_zerocopy_send_client": false, 00:15:18.975 "zerocopy_threshold": 0, 00:15:18.975 "tls_version": 0, 00:15:18.975 "enable_ktls": false 00:15:18.975 } 00:15:18.975 } 00:15:18.975 ] 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "subsystem": "vmd", 00:15:18.975 "config": [] 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "subsystem": "accel", 00:15:18.975 "config": [ 00:15:18.975 { 00:15:18.975 "method": "accel_set_options", 00:15:18.975 "params": { 00:15:18.975 "small_cache_size": 128, 00:15:18.975 "large_cache_size": 16, 00:15:18.975 "task_count": 2048, 00:15:18.975 "sequence_count": 2048, 00:15:18.975 "buf_count": 2048 00:15:18.975 } 00:15:18.975 } 00:15:18.975 ] 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "subsystem": "bdev", 00:15:18.975 "config": [ 00:15:18.975 { 00:15:18.975 "method": "bdev_set_options", 00:15:18.975 "params": { 00:15:18.975 "bdev_io_pool_size": 65535, 00:15:18.975 "bdev_io_cache_size": 256, 00:15:18.975 "bdev_auto_examine": true, 00:15:18.975 "iobuf_small_cache_size": 128, 00:15:18.975 "iobuf_large_cache_size": 16 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "bdev_raid_set_options", 00:15:18.975 "params": { 00:15:18.975 "process_window_size_kb": 1024 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "bdev_iscsi_set_options", 00:15:18.975 "params": { 00:15:18.975 "timeout_sec": 30 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "bdev_nvme_set_options", 00:15:18.975 "params": { 00:15:18.975 "action_on_timeout": "none", 00:15:18.975 "timeout_us": 0, 00:15:18.975 "timeout_admin_us": 0, 00:15:18.975 "keep_alive_timeout_ms": 10000, 00:15:18.975 "arbitration_burst": 0, 00:15:18.975 "low_priority_weight": 0, 00:15:18.975 "medium_priority_weight": 0, 00:15:18.975 "high_priority_weight": 0, 00:15:18.975 "nvme_adminq_poll_period_us": 10000, 00:15:18.975 "nvme_ioq_poll_period_us": 0, 00:15:18.975 "io_queue_requests": 0, 00:15:18.975 "delay_cmd_submit": true, 00:15:18.975 "transport_retry_count": 4, 00:15:18.975 "bdev_retry_count": 3, 00:15:18.975 "transport_ack_timeout": 0, 00:15:18.975 "ctrlr_loss_timeout_sec": 0, 00:15:18.975 "reconnect_delay_sec": 0, 00:15:18.975 "fast_io_fail_timeout_sec": 0, 00:15:18.975 "disable_auto_failback": false, 00:15:18.975 "generate_uuids": false, 00:15:18.975 "transport_tos": 0, 00:15:18.975 "nvme_error_stat": false, 00:15:18.975 "rdma_srq_size": 0, 00:15:18.975 "io_path_stat": false, 00:15:18.975 "allow_accel_sequence": false, 00:15:18.975 "rdma_max_cq_size": 0, 00:15:18.975 "rdma_cm_event_timeout_ms": 0, 00:15:18.975 "dhchap_digests": [ 00:15:18.975 "sha256", 00:15:18.975 "sha384", 00:15:18.975 "sha512" 00:15:18.975 ], 00:15:18.975 "dhchap_dhgroups": [ 00:15:18.975 "null", 00:15:18.975 "ffdhe2048", 00:15:18.975 "ffdhe3072", 00:15:18.975 "ffdhe4096", 00:15:18.975 "ffdhe6144", 00:15:18.975 "ffdhe8192" 00:15:18.975 ] 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "bdev_nvme_set_hotplug", 00:15:18.975 "params": { 00:15:18.975 "period_us": 100000, 00:15:18.975 "enable": false 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "bdev_malloc_create", 00:15:18.975 "params": { 00:15:18.975 "name": "malloc0", 00:15:18.975 "num_blocks": 8192, 00:15:18.975 "block_size": 4096, 00:15:18.975 
"physical_block_size": 4096, 00:15:18.975 "uuid": "834b4c2d-7dfa-420e-aff2-957516ee979c", 00:15:18.975 "optimal_io_boundary": 0 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "bdev_wait_for_examine" 00:15:18.975 } 00:15:18.975 ] 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "subsystem": "nbd", 00:15:18.975 "config": [] 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "subsystem": "scheduler", 00:15:18.975 "config": [ 00:15:18.975 { 00:15:18.975 "method": "framework_set_scheduler", 00:15:18.975 "params": { 00:15:18.975 "name": "static" 00:15:18.975 } 00:15:18.975 } 00:15:18.975 ] 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "subsystem": "nvmf", 00:15:18.975 "config": [ 00:15:18.975 { 00:15:18.975 "method": "nvmf_set_config", 00:15:18.975 "params": { 00:15:18.975 "discovery_filter": "match_any", 00:15:18.975 "admin_cmd_passthru": { 00:15:18.975 "identify_ctrlr": false 00:15:18.975 } 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "nvmf_set_max_subsystems", 00:15:18.975 "params": { 00:15:18.975 "max_subsystems": 1024 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "nvmf_set_crdt", 00:15:18.975 "params": { 00:15:18.975 "crdt1": 0, 00:15:18.975 "crdt2": 0, 00:15:18.975 "crdt3": 0 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "nvmf_create_transport", 00:15:18.975 "params": { 00:15:18.975 "trtype": "TCP", 00:15:18.975 "max_queue_depth": 128, 00:15:18.975 "max_io_qpairs_per_ctrlr": 127, 00:15:18.975 "in_capsule_data_size": 4096, 00:15:18.975 "max_io_size": 131072, 00:15:18.975 "io_unit_size": 131072, 00:15:18.975 "max_aq_depth": 128, 00:15:18.975 "num_shared_buffers": 511, 00:15:18.975 "buf_cache_size": 4294967295, 00:15:18.975 "dif_insert_or_strip": false, 00:15:18.975 "zcopy": false, 00:15:18.975 "c2h_success": false, 00:15:18.975 "sock_priority": 0, 00:15:18.975 "abort_timeout_sec": 1, 00:15:18.975 "ack_timeout": 0, 00:15:18.975 "data_wr_pool_size": 0 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "nvmf_create_subsystem", 00:15:18.975 "params": { 00:15:18.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.975 "allow_any_host": false, 00:15:18.975 "serial_number": "SPDK00000000000001", 00:15:18.975 "model_number": "SPDK bdev Controller", 00:15:18.975 "max_namespaces": 10, 00:15:18.975 "min_cntlid": 1, 00:15:18.975 "max_cntlid": 65519, 00:15:18.975 "ana_reporting": false 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "nvmf_subsystem_add_host", 00:15:18.975 "params": { 00:15:18.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.975 "host": "nqn.2016-06.io.spdk:host1", 00:15:18.975 "psk": "/tmp/tmp.L7UzFZjU1y" 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "nvmf_subsystem_add_ns", 00:15:18.975 "params": { 00:15:18.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.975 "namespace": { 00:15:18.975 "nsid": 1, 00:15:18.975 "bdev_name": "malloc0", 00:15:18.975 "nguid": "834B4C2D7DFA420EAFF2957516EE979C", 00:15:18.975 "uuid": "834b4c2d-7dfa-420e-aff2-957516ee979c", 00:15:18.975 "no_auto_visible": false 00:15:18.975 } 00:15:18.975 } 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "method": "nvmf_subsystem_add_listener", 00:15:18.975 "params": { 00:15:18.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.975 "listen_address": { 00:15:18.975 "trtype": "TCP", 00:15:18.975 "adrfam": "IPv4", 00:15:18.975 "traddr": "10.0.0.2", 00:15:18.975 "trsvcid": "4420" 00:15:18.975 }, 00:15:18.975 "secure_channel": true 00:15:18.975 } 00:15:18.975 } 00:15:18.975 ] 00:15:18.975 } 
00:15:18.975 ] 00:15:18.975 }' 00:15:18.975 02:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:19.540 02:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:19.540 "subsystems": [ 00:15:19.540 { 00:15:19.540 "subsystem": "keyring", 00:15:19.540 "config": [] 00:15:19.540 }, 00:15:19.540 { 00:15:19.540 "subsystem": "iobuf", 00:15:19.540 "config": [ 00:15:19.540 { 00:15:19.540 "method": "iobuf_set_options", 00:15:19.540 "params": { 00:15:19.540 "small_pool_count": 8192, 00:15:19.540 "large_pool_count": 1024, 00:15:19.540 "small_bufsize": 8192, 00:15:19.540 "large_bufsize": 135168 00:15:19.540 } 00:15:19.540 } 00:15:19.540 ] 00:15:19.540 }, 00:15:19.540 { 00:15:19.540 "subsystem": "sock", 00:15:19.540 "config": [ 00:15:19.540 { 00:15:19.540 "method": "sock_impl_set_options", 00:15:19.540 "params": { 00:15:19.540 "impl_name": "posix", 00:15:19.540 "recv_buf_size": 2097152, 00:15:19.540 "send_buf_size": 2097152, 00:15:19.540 "enable_recv_pipe": true, 00:15:19.540 "enable_quickack": false, 00:15:19.540 "enable_placement_id": 0, 00:15:19.540 "enable_zerocopy_send_server": true, 00:15:19.540 "enable_zerocopy_send_client": false, 00:15:19.540 "zerocopy_threshold": 0, 00:15:19.540 "tls_version": 0, 00:15:19.540 "enable_ktls": false 00:15:19.541 } 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "method": "sock_impl_set_options", 00:15:19.541 "params": { 00:15:19.541 "impl_name": "ssl", 00:15:19.541 "recv_buf_size": 4096, 00:15:19.541 "send_buf_size": 4096, 00:15:19.541 "enable_recv_pipe": true, 00:15:19.541 "enable_quickack": false, 00:15:19.541 "enable_placement_id": 0, 00:15:19.541 "enable_zerocopy_send_server": true, 00:15:19.541 "enable_zerocopy_send_client": false, 00:15:19.541 "zerocopy_threshold": 0, 00:15:19.541 "tls_version": 0, 00:15:19.541 "enable_ktls": false 00:15:19.541 } 00:15:19.541 } 00:15:19.541 ] 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "subsystem": "vmd", 00:15:19.541 "config": [] 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "subsystem": "accel", 00:15:19.541 "config": [ 00:15:19.541 { 00:15:19.541 "method": "accel_set_options", 00:15:19.541 "params": { 00:15:19.541 "small_cache_size": 128, 00:15:19.541 "large_cache_size": 16, 00:15:19.541 "task_count": 2048, 00:15:19.541 "sequence_count": 2048, 00:15:19.541 "buf_count": 2048 00:15:19.541 } 00:15:19.541 } 00:15:19.541 ] 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "subsystem": "bdev", 00:15:19.541 "config": [ 00:15:19.541 { 00:15:19.541 "method": "bdev_set_options", 00:15:19.541 "params": { 00:15:19.541 "bdev_io_pool_size": 65535, 00:15:19.541 "bdev_io_cache_size": 256, 00:15:19.541 "bdev_auto_examine": true, 00:15:19.541 "iobuf_small_cache_size": 128, 00:15:19.541 "iobuf_large_cache_size": 16 00:15:19.541 } 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "method": "bdev_raid_set_options", 00:15:19.541 "params": { 00:15:19.541 "process_window_size_kb": 1024 00:15:19.541 } 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "method": "bdev_iscsi_set_options", 00:15:19.541 "params": { 00:15:19.541 "timeout_sec": 30 00:15:19.541 } 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "method": "bdev_nvme_set_options", 00:15:19.541 "params": { 00:15:19.541 "action_on_timeout": "none", 00:15:19.541 "timeout_us": 0, 00:15:19.541 "timeout_admin_us": 0, 00:15:19.541 "keep_alive_timeout_ms": 10000, 00:15:19.541 "arbitration_burst": 0, 00:15:19.541 "low_priority_weight": 0, 00:15:19.541 "medium_priority_weight": 0, 00:15:19.541 
"high_priority_weight": 0, 00:15:19.541 "nvme_adminq_poll_period_us": 10000, 00:15:19.541 "nvme_ioq_poll_period_us": 0, 00:15:19.541 "io_queue_requests": 512, 00:15:19.541 "delay_cmd_submit": true, 00:15:19.541 "transport_retry_count": 4, 00:15:19.541 "bdev_retry_count": 3, 00:15:19.541 "transport_ack_timeout": 0, 00:15:19.541 "ctrlr_loss_timeout_sec": 0, 00:15:19.541 "reconnect_delay_sec": 0, 00:15:19.541 "fast_io_fail_timeout_sec": 0, 00:15:19.541 "disable_auto_failback": false, 00:15:19.541 "generate_uuids": false, 00:15:19.541 "transport_tos": 0, 00:15:19.541 "nvme_error_stat": false, 00:15:19.541 "rdma_srq_size": 0, 00:15:19.541 "io_path_stat": false, 00:15:19.541 "allow_accel_sequence": false, 00:15:19.541 "rdma_max_cq_size": 0, 00:15:19.541 "rdma_cm_event_timeout_ms": 0, 00:15:19.541 "dhchap_digests": [ 00:15:19.541 "sha256", 00:15:19.541 "sha384", 00:15:19.541 "sha512" 00:15:19.541 ], 00:15:19.541 "dhchap_dhgroups": [ 00:15:19.541 "null", 00:15:19.541 "ffdhe2048", 00:15:19.541 "ffdhe3072", 00:15:19.541 "ffdhe4096", 00:15:19.541 "ffdhe6144", 00:15:19.541 "ffdhe8192" 00:15:19.541 ] 00:15:19.541 } 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "method": "bdev_nvme_attach_controller", 00:15:19.541 "params": { 00:15:19.541 "name": "TLSTEST", 00:15:19.541 "trtype": "TCP", 00:15:19.541 "adrfam": "IPv4", 00:15:19.541 "traddr": "10.0.0.2", 00:15:19.541 "trsvcid": "4420", 00:15:19.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.541 "prchk_reftag": false, 00:15:19.541 "prchk_guard": false, 00:15:19.541 "ctrlr_loss_timeout_sec": 0, 00:15:19.541 "reconnect_delay_sec": 0, 00:15:19.541 "fast_io_fail_timeout_sec": 0, 00:15:19.541 "psk": "/tmp/tmp.L7UzFZjU1y", 00:15:19.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:19.541 "hdgst": false, 00:15:19.541 "ddgst": false 00:15:19.541 } 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "method": "bdev_nvme_set_hotplug", 00:15:19.541 "params": { 00:15:19.541 "period_us": 100000, 00:15:19.541 "enable": false 00:15:19.541 } 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "method": "bdev_wait_for_examine" 00:15:19.541 } 00:15:19.541 ] 00:15:19.541 }, 00:15:19.541 { 00:15:19.541 "subsystem": "nbd", 00:15:19.541 "config": [] 00:15:19.541 } 00:15:19.541 ] 00:15:19.541 }' 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2314707 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2314707 ']' 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2314707 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2314707 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2314707' 00:15:19.541 killing process with pid 2314707 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2314707 00:15:19.541 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.541 00:15:19.541 Latency(us) 00:15:19.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.541 
=================================================================================================================== 00:15:19.541 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:19.541 [2024-05-15 02:32:06.689648] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2314707 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2314416 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2314416 ']' 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2314416 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:19.541 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2314416 00:15:19.799 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:19.799 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:19.799 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2314416' 00:15:19.799 killing process with pid 2314416 00:15:19.799 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2314416 00:15:19.799 [2024-05-15 02:32:06.978047] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:19.799 [2024-05-15 02:32:06.978108] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:19.799 02:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2314416 00:15:20.058 02:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:20.058 02:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.058 02:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:20.058 "subsystems": [ 00:15:20.058 { 00:15:20.058 "subsystem": "keyring", 00:15:20.058 "config": [] 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "subsystem": "iobuf", 00:15:20.058 "config": [ 00:15:20.058 { 00:15:20.058 "method": "iobuf_set_options", 00:15:20.058 "params": { 00:15:20.058 "small_pool_count": 8192, 00:15:20.058 "large_pool_count": 1024, 00:15:20.058 "small_bufsize": 8192, 00:15:20.058 "large_bufsize": 135168 00:15:20.058 } 00:15:20.058 } 00:15:20.058 ] 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "subsystem": "sock", 00:15:20.058 "config": [ 00:15:20.058 { 00:15:20.058 "method": "sock_impl_set_options", 00:15:20.058 "params": { 00:15:20.058 "impl_name": "posix", 00:15:20.058 "recv_buf_size": 2097152, 00:15:20.058 "send_buf_size": 2097152, 00:15:20.058 "enable_recv_pipe": true, 00:15:20.058 "enable_quickack": false, 00:15:20.058 "enable_placement_id": 0, 00:15:20.058 "enable_zerocopy_send_server": true, 00:15:20.058 "enable_zerocopy_send_client": false, 00:15:20.058 "zerocopy_threshold": 0, 00:15:20.058 "tls_version": 0, 00:15:20.058 "enable_ktls": false 00:15:20.058 } 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "method": "sock_impl_set_options", 00:15:20.058 "params": { 00:15:20.058 "impl_name": "ssl", 00:15:20.058 "recv_buf_size": 4096, 00:15:20.058 
"send_buf_size": 4096, 00:15:20.058 "enable_recv_pipe": true, 00:15:20.058 "enable_quickack": false, 00:15:20.058 "enable_placement_id": 0, 00:15:20.058 "enable_zerocopy_send_server": true, 00:15:20.058 "enable_zerocopy_send_client": false, 00:15:20.058 "zerocopy_threshold": 0, 00:15:20.058 "tls_version": 0, 00:15:20.058 "enable_ktls": false 00:15:20.058 } 00:15:20.058 } 00:15:20.058 ] 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "subsystem": "vmd", 00:15:20.058 "config": [] 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "subsystem": "accel", 00:15:20.058 "config": [ 00:15:20.058 { 00:15:20.058 "method": "accel_set_options", 00:15:20.058 "params": { 00:15:20.058 "small_cache_size": 128, 00:15:20.058 "large_cache_size": 16, 00:15:20.058 "task_count": 2048, 00:15:20.058 "sequence_count": 2048, 00:15:20.058 "buf_count": 2048 00:15:20.058 } 00:15:20.058 } 00:15:20.058 ] 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "subsystem": "bdev", 00:15:20.058 "config": [ 00:15:20.058 { 00:15:20.058 "method": "bdev_set_options", 00:15:20.058 "params": { 00:15:20.058 "bdev_io_pool_size": 65535, 00:15:20.058 "bdev_io_cache_size": 256, 00:15:20.058 "bdev_auto_examine": true, 00:15:20.058 "iobuf_small_cache_size": 128, 00:15:20.058 "iobuf_large_cache_size": 16 00:15:20.058 } 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "method": "bdev_raid_set_options", 00:15:20.058 "params": { 00:15:20.058 "process_window_size_kb": 1024 00:15:20.058 } 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "method": "bdev_iscsi_set_options", 00:15:20.058 "params": { 00:15:20.058 "timeout_sec": 30 00:15:20.058 } 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "method": "bdev_nvme_set_options", 00:15:20.058 "params": { 00:15:20.058 "action_on_timeout": "none", 00:15:20.058 "timeout_us": 0, 00:15:20.058 "timeout_admin_us": 0, 00:15:20.058 "keep_alive_timeout_ms": 10000, 00:15:20.058 "arbitration_burst": 0, 00:15:20.058 "low_priority_weight": 0, 00:15:20.058 "medium_priority_weight": 0, 00:15:20.058 "high_priority_weight": 0, 00:15:20.058 "nvme_adminq_poll_period_us": 10000, 00:15:20.058 "nvme_ioq_poll_period_us": 0, 00:15:20.058 "io_queue_requests": 0, 00:15:20.058 "delay_cmd_submit": true, 00:15:20.058 "transport_retry_count": 4, 00:15:20.058 "bdev_retry_count": 3, 00:15:20.058 "transport_ack_timeout": 0, 00:15:20.058 "ctrlr_loss_timeout_sec": 0, 00:15:20.058 "reconnect_delay_sec": 0, 00:15:20.058 "fast_io_fail_timeout_sec": 0, 00:15:20.058 "disable_auto_failback": false, 00:15:20.058 "generate_uuids": false, 00:15:20.058 "transport_tos": 0, 00:15:20.058 "nvme_error_stat": false, 00:15:20.058 "rdma_srq_size": 0, 00:15:20.058 "io_path_stat": false, 00:15:20.058 "allow_accel_sequence": false, 00:15:20.058 "rdma_max_cq_size": 0, 00:15:20.058 "rdma_cm_event_timeout_ms": 0, 00:15:20.058 "dhchap_digests": [ 00:15:20.058 "sha256", 00:15:20.058 "sha384", 00:15:20.058 "sha512" 00:15:20.058 ], 00:15:20.058 "dhchap_dhgroups": [ 00:15:20.058 "null", 00:15:20.058 "ffdhe2048", 00:15:20.058 "ffdhe3072", 00:15:20.058 "ffdhe4096", 00:15:20.058 "ffdhe6144", 00:15:20.058 "ffdhe8192" 00:15:20.058 ] 00:15:20.058 } 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "method": "bdev_nvme_set_hotplug", 00:15:20.058 02:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:20.058 "params": { 00:15:20.058 "period_us": 100000, 00:15:20.058 "enable": false 00:15:20.058 } 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "method": "bdev_malloc_create", 00:15:20.058 "params": { 00:15:20.058 "name": "malloc0", 00:15:20.058 "num_blocks": 8192, 00:15:20.058 
"block_size": 4096, 00:15:20.058 "physical_block_size": 4096, 00:15:20.058 "uuid": "834b4c2d-7dfa-420e-aff2-957516ee979c", 00:15:20.058 "optimal_io_boundary": 0 00:15:20.058 } 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "method": "bdev_wait_for_examine" 00:15:20.058 } 00:15:20.058 ] 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "subsystem": "nbd", 00:15:20.058 "config": [] 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "subsystem": "scheduler", 00:15:20.058 "config": [ 00:15:20.058 { 00:15:20.058 "method": "framework_set_scheduler", 00:15:20.058 "params": { 00:15:20.058 "name": "static" 00:15:20.058 } 00:15:20.058 } 00:15:20.058 ] 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "subsystem": "nvmf", 00:15:20.058 "config": [ 00:15:20.058 { 00:15:20.058 "method": "nvmf_set_config", 00:15:20.058 "params": { 00:15:20.058 "discovery_filter": "match_any", 00:15:20.058 "admin_cmd_passthru": { 00:15:20.058 "identify_ctrlr": false 00:15:20.058 } 00:15:20.059 } 00:15:20.059 }, 00:15:20.059 { 00:15:20.059 "method": "nvmf_set_max_subsystems", 00:15:20.059 "params": { 00:15:20.059 "max_subsystems": 1024 00:15:20.059 } 00:15:20.059 }, 00:15:20.059 { 00:15:20.059 "method": "nvmf_set_crdt", 00:15:20.059 "params": { 00:15:20.059 "crdt1": 0, 00:15:20.059 "crdt2": 0, 00:15:20.059 "crdt3": 0 00:15:20.059 } 00:15:20.059 }, 00:15:20.059 { 00:15:20.059 "method": "nvmf_create_transport", 00:15:20.059 "params": { 00:15:20.059 "trtype": "TCP", 00:15:20.059 "max_queue_depth": 128, 00:15:20.059 "max_io_qpairs_per_ctrlr": 127, 00:15:20.059 "in_capsule_data_size": 4096, 00:15:20.059 "max_io_size": 131072, 00:15:20.059 "io_unit_size": 131072, 00:15:20.059 "max_aq_depth": 128, 00:15:20.059 "num_shared_buffers": 511, 00:15:20.059 "buf_cache_size": 4294967295, 00:15:20.059 "dif_insert_or_strip": false, 00:15:20.059 "zcopy": false, 00:15:20.059 "c2h_success": false, 00:15:20.059 "sock_priority": 0, 00:15:20.059 "abort_timeout_sec": 1, 00:15:20.059 "ack_timeout": 0, 00:15:20.059 "data_wr_pool_size": 0 00:15:20.059 } 00:15:20.059 }, 00:15:20.059 { 00:15:20.059 "method": "nvmf_create_subsystem", 00:15:20.059 "params": { 00:15:20.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.059 "allow_any_host": false, 00:15:20.059 "serial_number": "SPDK00000000000001", 00:15:20.059 "model_number": "SPDK bdev Controller", 00:15:20.059 "max_namespaces": 10, 00:15:20.059 "min_cntlid": 1, 00:15:20.059 "max_cntlid": 65519, 00:15:20.059 "ana_reporting": false 00:15:20.059 } 00:15:20.059 }, 00:15:20.059 { 00:15:20.059 "method": "nvmf_subsystem_add_host", 00:15:20.059 "params": { 00:15:20.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.059 "host": "nqn.2016-06.io.spdk:host1", 00:15:20.059 "psk": "/tmp/tmp.L7UzFZjU1y" 00:15:20.059 } 00:15:20.059 }, 00:15:20.059 { 00:15:20.059 "method": "nvmf_subsystem_add_ns", 00:15:20.059 "params": { 00:15:20.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.059 "namespace": { 00:15:20.059 "nsid": 1, 00:15:20.059 "bdev_name": "malloc0", 00:15:20.059 "nguid": "834B4C2D7DFA420EAFF2957516EE979C", 00:15:20.059 "uuid": "834b4c2d-7dfa-420e-aff2-957516ee979c", 00:15:20.059 "no_auto_visible": false 00:15:20.059 } 00:15:20.059 } 00:15:20.059 }, 00:15:20.059 { 00:15:20.059 "method": "nvmf_subsystem_add_listener", 00:15:20.059 "params": { 00:15:20.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.059 "listen_address": { 00:15:20.059 "trtype": "TCP", 00:15:20.059 "adrfam": "IPv4", 00:15:20.059 "traddr": "10.0.0.2", 00:15:20.059 "trsvcid": "4420" 00:15:20.059 }, 00:15:20.059 "secure_channel": true 00:15:20.059 } 00:15:20.059 } 
00:15:20.059 ] 00:15:20.059 } 00:15:20.059 ] 00:15:20.059 }' 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2314987 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2314987 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2314987 ']' 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:20.059 02:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.059 [2024-05-15 02:32:07.336326] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:15:20.059 [2024-05-15 02:32:07.336427] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.059 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.059 [2024-05-15 02:32:07.418289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.316 [2024-05-15 02:32:07.530521] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.317 [2024-05-15 02:32:07.530594] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.317 [2024-05-15 02:32:07.530609] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.317 [2024-05-15 02:32:07.530623] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.317 [2024-05-15 02:32:07.530634] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:20.317 [2024-05-15 02:32:07.530727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.574 [2024-05-15 02:32:07.763500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.574 [2024-05-15 02:32:07.779442] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:20.574 [2024-05-15 02:32:07.795463] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:20.574 [2024-05-15 02:32:07.795545] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:20.574 [2024-05-15 02:32:07.805138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2315138 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2315138 /var/tmp/bdevperf.sock 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2315138 ']' 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
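Note on this phase: both applications here are reconfigured from the JSON captured earlier by save_config rather than by replaying individual RPCs. The target (pid 2314987) reads the tgtconf blob from /dev/fd/62, and bdevperf (pid 2315138) reads the bdevperfconf blob, echoed just below, from /dev/fd/63. The /dev/fd paths are what bash process substitution resolves to; a minimal sketch of the pattern, assuming tgtconf and bdevperfconf hold the JSON shown in this log and paths are relative to the spdk checkout (the log additionally wraps the target in ip netns exec cvl_0_0_ns_spdk):

    # restart the target from the saved configuration; bash turns <(...) into a /dev/fd/NN path
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &

    # start bdevperf the same way, loading its bdev_nvme_attach_controller call (PSK included) from JSON
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &

Because the bdevperf configuration already contains the attach_controller entry with the PSK, the test can proceed straight to perform_tests once both processes are listening, which is what the trace below shows.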
00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:21.140 02:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:21.140 "subsystems": [ 00:15:21.140 { 00:15:21.140 "subsystem": "keyring", 00:15:21.140 "config": [] 00:15:21.140 }, 00:15:21.140 { 00:15:21.140 "subsystem": "iobuf", 00:15:21.140 "config": [ 00:15:21.140 { 00:15:21.140 "method": "iobuf_set_options", 00:15:21.140 "params": { 00:15:21.140 "small_pool_count": 8192, 00:15:21.140 "large_pool_count": 1024, 00:15:21.140 "small_bufsize": 8192, 00:15:21.140 "large_bufsize": 135168 00:15:21.140 } 00:15:21.140 } 00:15:21.140 ] 00:15:21.140 }, 00:15:21.140 { 00:15:21.140 "subsystem": "sock", 00:15:21.140 "config": [ 00:15:21.140 { 00:15:21.140 "method": "sock_impl_set_options", 00:15:21.140 "params": { 00:15:21.140 "impl_name": "posix", 00:15:21.140 "recv_buf_size": 2097152, 00:15:21.141 "send_buf_size": 2097152, 00:15:21.141 "enable_recv_pipe": true, 00:15:21.141 "enable_quickack": false, 00:15:21.141 "enable_placement_id": 0, 00:15:21.141 "enable_zerocopy_send_server": true, 00:15:21.141 "enable_zerocopy_send_client": false, 00:15:21.141 "zerocopy_threshold": 0, 00:15:21.141 "tls_version": 0, 00:15:21.141 "enable_ktls": false 00:15:21.141 } 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "method": "sock_impl_set_options", 00:15:21.141 "params": { 00:15:21.141 "impl_name": "ssl", 00:15:21.141 "recv_buf_size": 4096, 00:15:21.141 "send_buf_size": 4096, 00:15:21.141 "enable_recv_pipe": true, 00:15:21.141 "enable_quickack": false, 00:15:21.141 "enable_placement_id": 0, 00:15:21.141 "enable_zerocopy_send_server": true, 00:15:21.141 "enable_zerocopy_send_client": false, 00:15:21.141 "zerocopy_threshold": 0, 00:15:21.141 "tls_version": 0, 00:15:21.141 "enable_ktls": false 00:15:21.141 } 00:15:21.141 } 00:15:21.141 ] 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "subsystem": "vmd", 00:15:21.141 "config": [] 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "subsystem": "accel", 00:15:21.141 "config": [ 00:15:21.141 { 00:15:21.141 "method": "accel_set_options", 00:15:21.141 "params": { 00:15:21.141 "small_cache_size": 128, 00:15:21.141 "large_cache_size": 16, 00:15:21.141 "task_count": 2048, 00:15:21.141 "sequence_count": 2048, 00:15:21.141 "buf_count": 2048 00:15:21.141 } 00:15:21.141 } 00:15:21.141 ] 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "subsystem": "bdev", 00:15:21.141 "config": [ 00:15:21.141 { 00:15:21.141 "method": "bdev_set_options", 00:15:21.141 "params": { 00:15:21.141 "bdev_io_pool_size": 65535, 00:15:21.141 "bdev_io_cache_size": 256, 00:15:21.141 "bdev_auto_examine": true, 00:15:21.141 "iobuf_small_cache_size": 128, 00:15:21.141 "iobuf_large_cache_size": 16 00:15:21.141 } 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "method": "bdev_raid_set_options", 00:15:21.141 "params": { 00:15:21.141 "process_window_size_kb": 1024 00:15:21.141 } 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "method": "bdev_iscsi_set_options", 00:15:21.141 "params": { 00:15:21.141 "timeout_sec": 30 00:15:21.141 } 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "method": "bdev_nvme_set_options", 00:15:21.141 "params": { 00:15:21.141 "action_on_timeout": "none", 00:15:21.141 "timeout_us": 0, 00:15:21.141 "timeout_admin_us": 0, 00:15:21.141 "keep_alive_timeout_ms": 10000, 00:15:21.141 "arbitration_burst": 0, 00:15:21.141 "low_priority_weight": 0, 00:15:21.141 "medium_priority_weight": 0, 00:15:21.141 "high_priority_weight": 0, 00:15:21.141 "nvme_adminq_poll_period_us": 10000, 00:15:21.141 "nvme_ioq_poll_period_us": 0, 
00:15:21.141 "io_queue_requests": 512, 00:15:21.141 "delay_cmd_submit": true, 00:15:21.141 "transport_retry_count": 4, 00:15:21.141 "bdev_retry_count": 3, 00:15:21.141 "transport_ack_timeout": 0, 00:15:21.141 "ctrlr_loss_timeout_sec": 0, 00:15:21.141 "reconnect_delay_sec": 0, 00:15:21.141 "fast_io_fail_timeout_sec": 0, 00:15:21.141 "disable_auto_failback": false, 00:15:21.141 "generate_uuids": false, 00:15:21.141 "transport_tos": 0, 00:15:21.141 "nvme_error_stat": false, 00:15:21.141 "rdma_srq_size": 0, 00:15:21.141 "io_path_stat": false, 00:15:21.141 "allow_accel_sequence": false, 00:15:21.141 "rdma_max_cq_size": 0, 00:15:21.141 "rdma_cm_event_timeout_ms": 0, 00:15:21.141 "dhchap_digests": [ 00:15:21.141 "sha256", 00:15:21.141 "sha384", 00:15:21.141 "sha512" 00:15:21.141 ], 00:15:21.141 "dhchap_dhgroups": [ 00:15:21.141 "null", 00:15:21.141 "ffdhe2048", 00:15:21.141 "ffdhe3072", 00:15:21.141 "ffdhe4096", 00:15:21.141 "ffdhe6144", 00:15:21.141 "ffdhe8192" 00:15:21.141 ] 00:15:21.141 } 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "method": "bdev_nvme_attach_controller", 00:15:21.141 "params": { 00:15:21.141 "name": "TLSTEST", 00:15:21.141 "trtype": "TCP", 00:15:21.141 "adrfam": "IPv4", 00:15:21.141 "traddr": "10.0.0.2", 00:15:21.141 "trsvcid": "4420", 00:15:21.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:21.141 "prchk_reftag": false, 00:15:21.141 "prchk_guard": false, 00:15:21.141 "ctrlr_loss_timeout_sec": 0, 00:15:21.141 "reconnect_delay_sec": 0, 00:15:21.141 "fast_io_fail_timeout_sec": 0, 00:15:21.141 "psk": "/tmp/tmp.L7UzFZjU1y", 00:15:21.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:21.141 "hdgst": false, 00:15:21.141 "ddgst": false 00:15:21.141 } 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "method": "bdev_nvme_set_hotplug", 00:15:21.141 "params": { 00:15:21.141 "period_us": 100000, 00:15:21.141 "enable": false 00:15:21.141 } 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "method": "bdev_wait_for_examine" 00:15:21.141 } 00:15:21.141 ] 00:15:21.141 }, 00:15:21.141 { 00:15:21.141 "subsystem": "nbd", 00:15:21.141 "config": [] 00:15:21.141 } 00:15:21.141 ] 00:15:21.141 }' 00:15:21.141 02:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.141 [2024-05-15 02:32:08.321564] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:15:21.141 [2024-05-15 02:32:08.321641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315138 ] 00:15:21.141 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.141 [2024-05-15 02:32:08.390599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.141 [2024-05-15 02:32:08.501254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.400 [2024-05-15 02:32:08.664575] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:21.400 [2024-05-15 02:32:08.664722] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:21.964 02:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:21.964 02:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:21.964 02:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:22.222 Running I/O for 10 seconds... 00:15:32.185 00:15:32.186 Latency(us) 00:15:32.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.186 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:32.186 Verification LBA range: start 0x0 length 0x2000 00:15:32.186 TLSTESTn1 : 10.08 1442.10 5.63 0.00 0.00 88452.49 10048.85 125829.12 00:15:32.186 =================================================================================================================== 00:15:32.186 Total : 1442.10 5.63 0.00 0.00 88452.49 10048.85 125829.12 00:15:32.186 0 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2315138 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2315138 ']' 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2315138 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2315138 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2315138' 00:15:32.186 killing process with pid 2315138 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2315138 00:15:32.186 Received shutdown signal, test time was about 10.000000 seconds 00:15:32.186 00:15:32.186 Latency(us) 00:15:32.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.186 =================================================================================================================== 00:15:32.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.186 [2024-05-15 02:32:19.579082] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:15:32.186 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2315138 00:15:32.444 02:32:19 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2314987 00:15:32.444 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2314987 ']' 00:15:32.444 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2314987 00:15:32.444 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:32.444 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:32.444 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2314987 00:15:32.702 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:32.702 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:32.702 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2314987' 00:15:32.702 killing process with pid 2314987 00:15:32.702 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2314987 00:15:32.702 [2024-05-15 02:32:19.880087] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:32.702 [2024-05-15 02:32:19.880143] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:32.702 02:32:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2314987 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2316474 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2316474 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2316474 ']' 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:32.960 02:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.960 [2024-05-15 02:32:20.230153] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:15:32.960 [2024-05-15 02:32:20.230251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.960 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.960 [2024-05-15 02:32:20.314306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.218 [2024-05-15 02:32:20.424156] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.218 [2024-05-15 02:32:20.424229] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.218 [2024-05-15 02:32:20.424253] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.218 [2024-05-15 02:32:20.424264] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.218 [2024-05-15 02:32:20.424289] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.218 [2024-05-15 02:32:20.424315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.783 02:32:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:33.783 02:32:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:33.783 02:32:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.783 02:32:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.783 02:32:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.783 02:32:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.783 02:32:21 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.L7UzFZjU1y 00:15:33.783 02:32:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.L7UzFZjU1y 00:15:33.783 02:32:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:34.041 [2024-05-15 02:32:21.416023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.041 02:32:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:34.299 02:32:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:34.557 [2024-05-15 02:32:21.933371] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:34.557 [2024-05-15 02:32:21.933456] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:34.557 [2024-05-15 02:32:21.933666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.557 02:32:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:34.815 malloc0 00:15:34.815 02:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
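Condensed, the target-side sequence that setup_nvmf_tgt has traced out up to this point is the one below; the nvmf_subsystem_add_host call on the next line then binds the allowed host NQN to the TLS PSK file. This is a sketch assembled from the commands in this log, with $rpc standing for the in-tree RPC client:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport, flags exactly as traced above
    $rpc nvmf_create_transport -t tcp -o
    # subsystem with a serial number and room for 10 namespaces
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # TCP listener on 10.0.0.2:4420; -k requests a TLS-secured listener (secure_channel)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # 32 MiB malloc bdev with a 4096-byte block size (8192 blocks, matching the saved config)
    $rpc bdev_malloc_create 32 4096 -b malloc0
    # expose it as namespace 1 of the subsystem
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # allow host1 and associate it with the PSK file (the command on the next line of this log)
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y

The -k listener flag and the --psk host binding are the two pieces that make the 4420 listener a TLS endpoint; the recurring "PSK path ... deprecated" warning appears to refer to passing the PSK as a file path rather than through the keyring.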
00:15:35.073 02:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L7UzFZjU1y 00:15:35.332 [2024-05-15 02:32:22.667089] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2316848 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2316848 /var/tmp/bdevperf.sock 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2316848 ']' 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:35.332 02:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.332 [2024-05-15 02:32:22.729370] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:15:35.332 [2024-05-15 02:32:22.729440] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316848 ] 00:15:35.590 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.590 [2024-05-15 02:32:22.798858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.590 [2024-05-15 02:32:22.912238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.848 02:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:35.848 02:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:35.848 02:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L7UzFZjU1y 00:15:36.105 02:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:36.362 [2024-05-15 02:32:23.590809] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:36.362 nvme0n1 00:15:36.362 02:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:36.619 Running I/O for 1 seconds... 
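Unlike the earlier bdevperf runs, which handed the PSK file straight to bdev_nvme_attach_controller via --psk /tmp/tmp.L7UzFZjU1y (triggering the "spdk_nvme_ctrlr_opts.psk ... deprecated" warning), this run first registers the file as a named key in the bdevperf application and then refers to it by name. The two RPCs, as traced above (the results of the one-second run follow below):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # register the PSK file as key "key0" in the bdevperf application's keyring
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L7UzFZjU1y
    # attach the TLS-protected controller, referencing the key by name instead of a raw path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1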
00:15:37.552 00:15:37.552 Latency(us) 00:15:37.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.552 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:37.552 Verification LBA range: start 0x0 length 0x2000 00:15:37.552 nvme0n1 : 1.09 1358.65 5.31 0.00 0.00 91165.26 9320.68 145247.19 00:15:37.552 =================================================================================================================== 00:15:37.552 Total : 1358.65 5.31 0.00 0.00 91165.26 9320.68 145247.19 00:15:37.552 0 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2316848 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2316848 ']' 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2316848 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2316848 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2316848' 00:15:37.552 killing process with pid 2316848 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2316848 00:15:37.552 Received shutdown signal, test time was about 1.000000 seconds 00:15:37.552 00:15:37.552 Latency(us) 00:15:37.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.552 =================================================================================================================== 00:15:37.552 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:37.552 02:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2316848 00:15:37.810 02:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2316474 00:15:37.810 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2316474 ']' 00:15:37.810 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2316474 00:15:37.810 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:37.810 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:37.810 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2316474 00:15:38.068 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:38.068 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:38.068 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2316474' 00:15:38.068 killing process with pid 2316474 00:15:38.068 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2316474 00:15:38.068 [2024-05-15 02:32:25.234779] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:38.068 [2024-05-15 02:32:25.234836] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:38.068 02:32:25 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 2316474 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2317170 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2317170 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2317170 ']' 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:38.325 02:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.325 [2024-05-15 02:32:25.564890] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:15:38.325 [2024-05-15 02:32:25.565013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.325 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.325 [2024-05-15 02:32:25.638472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.585 [2024-05-15 02:32:25.744254] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.585 [2024-05-15 02:32:25.744308] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.585 [2024-05-15 02:32:25.744337] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.585 [2024-05-15 02:32:25.744348] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.585 [2024-05-15 02:32:25.744358] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.585 [2024-05-15 02:32:25.744385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.203 [2024-05-15 02:32:26.539767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.203 malloc0 00:15:39.203 [2024-05-15 02:32:26.572483] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:39.203 [2024-05-15 02:32:26.572587] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:39.203 [2024-05-15 02:32:26.572821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2317320 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2317320 /var/tmp/bdevperf.sock 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2317320 ']' 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:39.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:39.203 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.462 [2024-05-15 02:32:26.645490] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:15:39.462 [2024-05-15 02:32:26.645564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317320 ] 00:15:39.462 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.462 [2024-05-15 02:32:26.723670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.462 [2024-05-15 02:32:26.839612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.723 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:39.723 02:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:39.723 02:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L7UzFZjU1y 00:15:39.980 02:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:40.237 [2024-05-15 02:32:27.429594] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:40.237 nvme0n1 00:15:40.237 02:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:40.237 Running I/O for 1 seconds... 00:15:41.609 00:15:41.609 Latency(us) 00:15:41.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.609 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:41.609 Verification LBA range: start 0x0 length 0x2000 00:15:41.609 nvme0n1 : 1.10 1248.21 4.88 0.00 0.00 98811.81 6602.15 136703.24 00:15:41.609 =================================================================================================================== 00:15:41.610 Total : 1248.21 4.88 0.00 0.00 98811.81 6602.15 136703.24 00:15:41.610 0 00:15:41.610 02:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:41.610 02:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.610 02:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.610 02:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.610 02:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:15:41.610 "subsystems": [ 00:15:41.610 { 00:15:41.610 "subsystem": "keyring", 00:15:41.610 "config": [ 00:15:41.610 { 00:15:41.610 "method": "keyring_file_add_key", 00:15:41.610 "params": { 00:15:41.610 "name": "key0", 00:15:41.610 "path": "/tmp/tmp.L7UzFZjU1y" 00:15:41.610 } 00:15:41.610 } 00:15:41.610 ] 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "subsystem": "iobuf", 00:15:41.610 "config": [ 00:15:41.610 { 00:15:41.610 "method": "iobuf_set_options", 00:15:41.610 "params": { 00:15:41.610 "small_pool_count": 8192, 00:15:41.610 "large_pool_count": 1024, 00:15:41.610 "small_bufsize": 8192, 00:15:41.610 "large_bufsize": 135168 00:15:41.610 } 00:15:41.610 } 00:15:41.610 ] 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "subsystem": "sock", 00:15:41.610 "config": [ 00:15:41.610 { 00:15:41.610 "method": "sock_impl_set_options", 00:15:41.610 "params": { 00:15:41.610 "impl_name": "posix", 00:15:41.610 "recv_buf_size": 2097152, 
00:15:41.610 "send_buf_size": 2097152, 00:15:41.610 "enable_recv_pipe": true, 00:15:41.610 "enable_quickack": false, 00:15:41.610 "enable_placement_id": 0, 00:15:41.610 "enable_zerocopy_send_server": true, 00:15:41.610 "enable_zerocopy_send_client": false, 00:15:41.610 "zerocopy_threshold": 0, 00:15:41.610 "tls_version": 0, 00:15:41.610 "enable_ktls": false 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "sock_impl_set_options", 00:15:41.610 "params": { 00:15:41.610 "impl_name": "ssl", 00:15:41.610 "recv_buf_size": 4096, 00:15:41.610 "send_buf_size": 4096, 00:15:41.610 "enable_recv_pipe": true, 00:15:41.610 "enable_quickack": false, 00:15:41.610 "enable_placement_id": 0, 00:15:41.610 "enable_zerocopy_send_server": true, 00:15:41.610 "enable_zerocopy_send_client": false, 00:15:41.610 "zerocopy_threshold": 0, 00:15:41.610 "tls_version": 0, 00:15:41.610 "enable_ktls": false 00:15:41.610 } 00:15:41.610 } 00:15:41.610 ] 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "subsystem": "vmd", 00:15:41.610 "config": [] 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "subsystem": "accel", 00:15:41.610 "config": [ 00:15:41.610 { 00:15:41.610 "method": "accel_set_options", 00:15:41.610 "params": { 00:15:41.610 "small_cache_size": 128, 00:15:41.610 "large_cache_size": 16, 00:15:41.610 "task_count": 2048, 00:15:41.610 "sequence_count": 2048, 00:15:41.610 "buf_count": 2048 00:15:41.610 } 00:15:41.610 } 00:15:41.610 ] 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "subsystem": "bdev", 00:15:41.610 "config": [ 00:15:41.610 { 00:15:41.610 "method": "bdev_set_options", 00:15:41.610 "params": { 00:15:41.610 "bdev_io_pool_size": 65535, 00:15:41.610 "bdev_io_cache_size": 256, 00:15:41.610 "bdev_auto_examine": true, 00:15:41.610 "iobuf_small_cache_size": 128, 00:15:41.610 "iobuf_large_cache_size": 16 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "bdev_raid_set_options", 00:15:41.610 "params": { 00:15:41.610 "process_window_size_kb": 1024 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "bdev_iscsi_set_options", 00:15:41.610 "params": { 00:15:41.610 "timeout_sec": 30 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "bdev_nvme_set_options", 00:15:41.610 "params": { 00:15:41.610 "action_on_timeout": "none", 00:15:41.610 "timeout_us": 0, 00:15:41.610 "timeout_admin_us": 0, 00:15:41.610 "keep_alive_timeout_ms": 10000, 00:15:41.610 "arbitration_burst": 0, 00:15:41.610 "low_priority_weight": 0, 00:15:41.610 "medium_priority_weight": 0, 00:15:41.610 "high_priority_weight": 0, 00:15:41.610 "nvme_adminq_poll_period_us": 10000, 00:15:41.610 "nvme_ioq_poll_period_us": 0, 00:15:41.610 "io_queue_requests": 0, 00:15:41.610 "delay_cmd_submit": true, 00:15:41.610 "transport_retry_count": 4, 00:15:41.610 "bdev_retry_count": 3, 00:15:41.610 "transport_ack_timeout": 0, 00:15:41.610 "ctrlr_loss_timeout_sec": 0, 00:15:41.610 "reconnect_delay_sec": 0, 00:15:41.610 "fast_io_fail_timeout_sec": 0, 00:15:41.610 "disable_auto_failback": false, 00:15:41.610 "generate_uuids": false, 00:15:41.610 "transport_tos": 0, 00:15:41.610 "nvme_error_stat": false, 00:15:41.610 "rdma_srq_size": 0, 00:15:41.610 "io_path_stat": false, 00:15:41.610 "allow_accel_sequence": false, 00:15:41.610 "rdma_max_cq_size": 0, 00:15:41.610 "rdma_cm_event_timeout_ms": 0, 00:15:41.610 "dhchap_digests": [ 00:15:41.610 "sha256", 00:15:41.610 "sha384", 00:15:41.610 "sha512" 00:15:41.610 ], 00:15:41.610 "dhchap_dhgroups": [ 00:15:41.610 "null", 00:15:41.610 "ffdhe2048", 00:15:41.610 "ffdhe3072", 
00:15:41.610 "ffdhe4096", 00:15:41.610 "ffdhe6144", 00:15:41.610 "ffdhe8192" 00:15:41.610 ] 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "bdev_nvme_set_hotplug", 00:15:41.610 "params": { 00:15:41.610 "period_us": 100000, 00:15:41.610 "enable": false 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "bdev_malloc_create", 00:15:41.610 "params": { 00:15:41.610 "name": "malloc0", 00:15:41.610 "num_blocks": 8192, 00:15:41.610 "block_size": 4096, 00:15:41.610 "physical_block_size": 4096, 00:15:41.610 "uuid": "5ad4c84a-3768-4434-902d-83c27b9c2d71", 00:15:41.610 "optimal_io_boundary": 0 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "bdev_wait_for_examine" 00:15:41.610 } 00:15:41.610 ] 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "subsystem": "nbd", 00:15:41.610 "config": [] 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "subsystem": "scheduler", 00:15:41.610 "config": [ 00:15:41.610 { 00:15:41.610 "method": "framework_set_scheduler", 00:15:41.610 "params": { 00:15:41.610 "name": "static" 00:15:41.610 } 00:15:41.610 } 00:15:41.610 ] 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "subsystem": "nvmf", 00:15:41.610 "config": [ 00:15:41.610 { 00:15:41.610 "method": "nvmf_set_config", 00:15:41.610 "params": { 00:15:41.610 "discovery_filter": "match_any", 00:15:41.610 "admin_cmd_passthru": { 00:15:41.610 "identify_ctrlr": false 00:15:41.610 } 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "nvmf_set_max_subsystems", 00:15:41.610 "params": { 00:15:41.610 "max_subsystems": 1024 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "nvmf_set_crdt", 00:15:41.610 "params": { 00:15:41.610 "crdt1": 0, 00:15:41.610 "crdt2": 0, 00:15:41.610 "crdt3": 0 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "nvmf_create_transport", 00:15:41.610 "params": { 00:15:41.610 "trtype": "TCP", 00:15:41.610 "max_queue_depth": 128, 00:15:41.610 "max_io_qpairs_per_ctrlr": 127, 00:15:41.610 "in_capsule_data_size": 4096, 00:15:41.610 "max_io_size": 131072, 00:15:41.610 "io_unit_size": 131072, 00:15:41.610 "max_aq_depth": 128, 00:15:41.610 "num_shared_buffers": 511, 00:15:41.610 "buf_cache_size": 4294967295, 00:15:41.610 "dif_insert_or_strip": false, 00:15:41.610 "zcopy": false, 00:15:41.610 "c2h_success": false, 00:15:41.610 "sock_priority": 0, 00:15:41.610 "abort_timeout_sec": 1, 00:15:41.610 "ack_timeout": 0, 00:15:41.610 "data_wr_pool_size": 0 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "nvmf_create_subsystem", 00:15:41.610 "params": { 00:15:41.610 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.610 "allow_any_host": false, 00:15:41.610 "serial_number": "00000000000000000000", 00:15:41.610 "model_number": "SPDK bdev Controller", 00:15:41.610 "max_namespaces": 32, 00:15:41.610 "min_cntlid": 1, 00:15:41.610 "max_cntlid": 65519, 00:15:41.610 "ana_reporting": false 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "nvmf_subsystem_add_host", 00:15:41.610 "params": { 00:15:41.610 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.610 "host": "nqn.2016-06.io.spdk:host1", 00:15:41.610 "psk": "key0" 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "nvmf_subsystem_add_ns", 00:15:41.610 "params": { 00:15:41.610 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.610 "namespace": { 00:15:41.610 "nsid": 1, 00:15:41.610 "bdev_name": "malloc0", 00:15:41.610 "nguid": "5AD4C84A37684434902D83C27B9C2D71", 00:15:41.610 "uuid": "5ad4c84a-3768-4434-902d-83c27b9c2d71", 00:15:41.610 
"no_auto_visible": false 00:15:41.610 } 00:15:41.610 } 00:15:41.610 }, 00:15:41.610 { 00:15:41.610 "method": "nvmf_subsystem_add_listener", 00:15:41.610 "params": { 00:15:41.610 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.610 "listen_address": { 00:15:41.610 "trtype": "TCP", 00:15:41.610 "adrfam": "IPv4", 00:15:41.610 "traddr": "10.0.0.2", 00:15:41.610 "trsvcid": "4420" 00:15:41.610 }, 00:15:41.610 "secure_channel": true 00:15:41.610 } 00:15:41.611 } 00:15:41.611 ] 00:15:41.611 } 00:15:41.611 ] 00:15:41.611 }' 00:15:41.611 02:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:41.868 02:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:15:41.868 "subsystems": [ 00:15:41.868 { 00:15:41.868 "subsystem": "keyring", 00:15:41.868 "config": [ 00:15:41.868 { 00:15:41.868 "method": "keyring_file_add_key", 00:15:41.868 "params": { 00:15:41.868 "name": "key0", 00:15:41.868 "path": "/tmp/tmp.L7UzFZjU1y" 00:15:41.868 } 00:15:41.868 } 00:15:41.868 ] 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "subsystem": "iobuf", 00:15:41.868 "config": [ 00:15:41.868 { 00:15:41.868 "method": "iobuf_set_options", 00:15:41.868 "params": { 00:15:41.868 "small_pool_count": 8192, 00:15:41.868 "large_pool_count": 1024, 00:15:41.868 "small_bufsize": 8192, 00:15:41.868 "large_bufsize": 135168 00:15:41.868 } 00:15:41.868 } 00:15:41.868 ] 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "subsystem": "sock", 00:15:41.868 "config": [ 00:15:41.868 { 00:15:41.868 "method": "sock_impl_set_options", 00:15:41.868 "params": { 00:15:41.868 "impl_name": "posix", 00:15:41.868 "recv_buf_size": 2097152, 00:15:41.868 "send_buf_size": 2097152, 00:15:41.868 "enable_recv_pipe": true, 00:15:41.868 "enable_quickack": false, 00:15:41.868 "enable_placement_id": 0, 00:15:41.868 "enable_zerocopy_send_server": true, 00:15:41.868 "enable_zerocopy_send_client": false, 00:15:41.868 "zerocopy_threshold": 0, 00:15:41.868 "tls_version": 0, 00:15:41.868 "enable_ktls": false 00:15:41.868 } 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "method": "sock_impl_set_options", 00:15:41.868 "params": { 00:15:41.868 "impl_name": "ssl", 00:15:41.868 "recv_buf_size": 4096, 00:15:41.868 "send_buf_size": 4096, 00:15:41.868 "enable_recv_pipe": true, 00:15:41.868 "enable_quickack": false, 00:15:41.868 "enable_placement_id": 0, 00:15:41.868 "enable_zerocopy_send_server": true, 00:15:41.868 "enable_zerocopy_send_client": false, 00:15:41.868 "zerocopy_threshold": 0, 00:15:41.868 "tls_version": 0, 00:15:41.868 "enable_ktls": false 00:15:41.868 } 00:15:41.868 } 00:15:41.868 ] 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "subsystem": "vmd", 00:15:41.868 "config": [] 00:15:41.868 }, 00:15:41.868 { 00:15:41.868 "subsystem": "accel", 00:15:41.868 "config": [ 00:15:41.868 { 00:15:41.869 "method": "accel_set_options", 00:15:41.869 "params": { 00:15:41.869 "small_cache_size": 128, 00:15:41.869 "large_cache_size": 16, 00:15:41.869 "task_count": 2048, 00:15:41.869 "sequence_count": 2048, 00:15:41.869 "buf_count": 2048 00:15:41.869 } 00:15:41.869 } 00:15:41.869 ] 00:15:41.869 }, 00:15:41.869 { 00:15:41.869 "subsystem": "bdev", 00:15:41.869 "config": [ 00:15:41.869 { 00:15:41.869 "method": "bdev_set_options", 00:15:41.869 "params": { 00:15:41.869 "bdev_io_pool_size": 65535, 00:15:41.869 "bdev_io_cache_size": 256, 00:15:41.869 "bdev_auto_examine": true, 00:15:41.869 "iobuf_small_cache_size": 128, 00:15:41.869 "iobuf_large_cache_size": 16 00:15:41.869 } 00:15:41.869 }, 
00:15:41.869 { 00:15:41.869 "method": "bdev_raid_set_options", 00:15:41.869 "params": { 00:15:41.869 "process_window_size_kb": 1024 00:15:41.869 } 00:15:41.869 }, 00:15:41.869 { 00:15:41.869 "method": "bdev_iscsi_set_options", 00:15:41.869 "params": { 00:15:41.869 "timeout_sec": 30 00:15:41.869 } 00:15:41.869 }, 00:15:41.869 { 00:15:41.869 "method": "bdev_nvme_set_options", 00:15:41.869 "params": { 00:15:41.869 "action_on_timeout": "none", 00:15:41.869 "timeout_us": 0, 00:15:41.869 "timeout_admin_us": 0, 00:15:41.869 "keep_alive_timeout_ms": 10000, 00:15:41.869 "arbitration_burst": 0, 00:15:41.869 "low_priority_weight": 0, 00:15:41.869 "medium_priority_weight": 0, 00:15:41.869 "high_priority_weight": 0, 00:15:41.869 "nvme_adminq_poll_period_us": 10000, 00:15:41.869 "nvme_ioq_poll_period_us": 0, 00:15:41.869 "io_queue_requests": 512, 00:15:41.869 "delay_cmd_submit": true, 00:15:41.869 "transport_retry_count": 4, 00:15:41.869 "bdev_retry_count": 3, 00:15:41.869 "transport_ack_timeout": 0, 00:15:41.869 "ctrlr_loss_timeout_sec": 0, 00:15:41.869 "reconnect_delay_sec": 0, 00:15:41.869 "fast_io_fail_timeout_sec": 0, 00:15:41.869 "disable_auto_failback": false, 00:15:41.869 "generate_uuids": false, 00:15:41.869 "transport_tos": 0, 00:15:41.869 "nvme_error_stat": false, 00:15:41.869 "rdma_srq_size": 0, 00:15:41.869 "io_path_stat": false, 00:15:41.869 "allow_accel_sequence": false, 00:15:41.869 "rdma_max_cq_size": 0, 00:15:41.869 "rdma_cm_event_timeout_ms": 0, 00:15:41.869 "dhchap_digests": [ 00:15:41.869 "sha256", 00:15:41.869 "sha384", 00:15:41.869 "sha512" 00:15:41.869 ], 00:15:41.869 "dhchap_dhgroups": [ 00:15:41.869 "null", 00:15:41.869 "ffdhe2048", 00:15:41.869 "ffdhe3072", 00:15:41.869 "ffdhe4096", 00:15:41.869 "ffdhe6144", 00:15:41.869 "ffdhe8192" 00:15:41.869 ] 00:15:41.869 } 00:15:41.869 }, 00:15:41.869 { 00:15:41.869 "method": "bdev_nvme_attach_controller", 00:15:41.869 "params": { 00:15:41.869 "name": "nvme0", 00:15:41.869 "trtype": "TCP", 00:15:41.869 "adrfam": "IPv4", 00:15:41.869 "traddr": "10.0.0.2", 00:15:41.869 "trsvcid": "4420", 00:15:41.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.869 "prchk_reftag": false, 00:15:41.869 "prchk_guard": false, 00:15:41.869 "ctrlr_loss_timeout_sec": 0, 00:15:41.869 "reconnect_delay_sec": 0, 00:15:41.869 "fast_io_fail_timeout_sec": 0, 00:15:41.869 "psk": "key0", 00:15:41.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.869 "hdgst": false, 00:15:41.869 "ddgst": false 00:15:41.869 } 00:15:41.869 }, 00:15:41.869 { 00:15:41.869 "method": "bdev_nvme_set_hotplug", 00:15:41.869 "params": { 00:15:41.869 "period_us": 100000, 00:15:41.869 "enable": false 00:15:41.869 } 00:15:41.869 }, 00:15:41.869 { 00:15:41.869 "method": "bdev_enable_histogram", 00:15:41.869 "params": { 00:15:41.869 "name": "nvme0n1", 00:15:41.869 "enable": true 00:15:41.869 } 00:15:41.869 }, 00:15:41.869 { 00:15:41.869 "method": "bdev_wait_for_examine" 00:15:41.869 } 00:15:41.869 ] 00:15:41.869 }, 00:15:41.869 { 00:15:41.869 "subsystem": "nbd", 00:15:41.869 "config": [] 00:15:41.869 } 00:15:41.869 ] 00:15:41.869 }' 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2317320 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2317320 ']' 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2317320 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:41.869 
02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2317320 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2317320' 00:15:41.869 killing process with pid 2317320 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2317320 00:15:41.869 Received shutdown signal, test time was about 1.000000 seconds 00:15:41.869 00:15:41.869 Latency(us) 00:15:41.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.869 =================================================================================================================== 00:15:41.869 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.869 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2317320 00:15:42.126 02:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2317170 00:15:42.126 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2317170 ']' 00:15:42.126 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2317170 00:15:42.126 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:42.126 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:42.126 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2317170 00:15:42.126 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:42.127 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:42.127 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2317170' 00:15:42.127 killing process with pid 2317170 00:15:42.127 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2317170 00:15:42.127 [2024-05-15 02:32:29.520321] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:42.127 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2317170 00:15:42.691 02:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:42.691 02:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:42.691 02:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:15:42.691 "subsystems": [ 00:15:42.691 { 00:15:42.691 "subsystem": "keyring", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "keyring_file_add_key", 00:15:42.691 "params": { 00:15:42.691 "name": "key0", 00:15:42.691 "path": "/tmp/tmp.L7UzFZjU1y" 00:15:42.691 } 00:15:42.691 } 00:15:42.691 ] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "iobuf", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "iobuf_set_options", 00:15:42.691 "params": { 00:15:42.691 "small_pool_count": 8192, 00:15:42.691 "large_pool_count": 1024, 00:15:42.691 "small_bufsize": 8192, 00:15:42.691 "large_bufsize": 135168 00:15:42.691 } 00:15:42.691 } 00:15:42.691 ] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "sock", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "sock_impl_set_options", 00:15:42.691 "params": { 00:15:42.691 "impl_name": "posix", 00:15:42.691 
"recv_buf_size": 2097152, 00:15:42.691 "send_buf_size": 2097152, 00:15:42.691 "enable_recv_pipe": true, 00:15:42.691 "enable_quickack": false, 00:15:42.691 "enable_placement_id": 0, 00:15:42.691 "enable_zerocopy_send_server": true, 00:15:42.691 "enable_zerocopy_send_client": false, 00:15:42.691 "zerocopy_threshold": 0, 00:15:42.691 "tls_version": 0, 00:15:42.691 "enable_ktls": false 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "sock_impl_set_options", 00:15:42.691 "params": { 00:15:42.691 "impl_name": "ssl", 00:15:42.691 "recv_buf_size": 4096, 00:15:42.691 "send_buf_size": 4096, 00:15:42.691 "enable_recv_pipe": true, 00:15:42.691 "enable_quickack": false, 00:15:42.691 "enable_placement_id": 0, 00:15:42.691 "enable_zerocopy_send_server": true, 00:15:42.691 "enable_zerocopy_send_client": false, 00:15:42.691 "zerocopy_threshold": 0, 00:15:42.691 "tls_version": 0, 00:15:42.691 "enable_ktls": false 00:15:42.691 } 00:15:42.691 } 00:15:42.691 ] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "vmd", 00:15:42.691 "config": [] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "accel", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "accel_set_options", 00:15:42.691 "params": { 00:15:42.691 "small_cache_size": 128, 00:15:42.691 "large_cache_size": 16, 00:15:42.691 "task_count": 2048, 00:15:42.691 "sequence_count": 2048, 00:15:42.691 "buf_count": 2048 00:15:42.691 } 00:15:42.691 } 00:15:42.691 ] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "bdev", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "bdev_set_options", 00:15:42.691 "params": { 00:15:42.691 "bdev_io_pool_size": 65535, 00:15:42.691 "bdev_io_cache_size": 256, 00:15:42.691 "bdev_auto_examine": true, 00:15:42.691 "iobuf_small_cache_size": 128, 00:15:42.691 "iobuf_large_cache_size": 16 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "bdev_raid_set_options", 00:15:42.691 "params": { 00:15:42.691 "process_window_size_kb": 1024 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "bdev_iscsi_set_options", 00:15:42.691 "params": { 00:15:42.691 "timeout_sec": 30 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "bdev_nvme_set_options", 00:15:42.691 "params": { 00:15:42.691 "action_on_timeout": "none", 00:15:42.691 "timeout_us": 0, 00:15:42.691 "timeout_admin_us": 0, 00:15:42.691 "keep_alive_timeout_ms": 10000, 00:15:42.691 "arbitration_burst": 0, 00:15:42.691 "low_priority_weight": 0, 00:15:42.691 "medium_priority_weight": 0, 00:15:42.691 "high_priority_weight": 0, 00:15:42.691 "nvme_adminq_poll_period_us": 10000, 00:15:42.691 "nvme_ioq_poll_period_us": 0, 00:15:42.691 "io_queue_requests": 0, 00:15:42.691 "delay_cmd_submit": true, 00:15:42.691 "transport_retry_count": 4, 00:15:42.691 "bdev_retry_count": 3, 00:15:42.691 "transport_ack_timeout": 0, 00:15:42.691 "ctrlr_loss_timeout_sec": 0, 00:15:42.691 "reconnect_delay_sec": 0, 00:15:42.691 "fast_io_fail_timeout_sec": 0, 00:15:42.691 "disable_auto_failback": false, 00:15:42.691 "generate_uuids": false, 00:15:42.691 "transport_tos": 0, 00:15:42.691 "nvme_error_stat": false, 00:15:42.691 "rdma_srq_size": 0, 00:15:42.691 "io_path_stat": false, 00:15:42.691 "allow_accel_sequence": false, 00:15:42.691 "rdma_max_cq_size": 0, 00:15:42.691 "rdma_cm_event_timeout_ms": 0, 00:15:42.691 "dhchap_digests": [ 00:15:42.691 "sha256", 00:15:42.691 "sha384", 00:15:42.691 "sha512" 00:15:42.691 ], 00:15:42.691 "dhchap_dhgroups": [ 00:15:42.691 "null", 00:15:42.691 "ffdhe2048", 
00:15:42.691 "ffdhe3072", 00:15:42.691 "ffdhe4096", 00:15:42.691 "ffdhe6144", 00:15:42.691 "ffdhe8192" 00:15:42.691 ] 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "bdev_nvme_set_hotplug", 00:15:42.691 "params": { 00:15:42.691 "period_us": 100000, 00:15:42.691 "enable": false 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "bdev_malloc_create", 00:15:42.691 "params": { 00:15:42.691 "name": "malloc0", 00:15:42.691 "num_blocks": 8192, 00:15:42.691 "block_size": 4096, 00:15:42.691 "physical_block_size": 4096, 00:15:42.691 "uuid": "5ad4c84a-3768-4434-902d-83c27b9c2d71", 00:15:42.691 "optimal_io_boundary": 0 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "bdev_wait_for_examine" 00:15:42.691 } 00:15:42.691 ] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "nbd", 00:15:42.691 "config": [] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "scheduler", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "framework_set_scheduler", 00:15:42.691 "params": { 00:15:42.691 "name": "static" 00:15:42.691 } 00:15:42.692 } 00:15:42.692 ] 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "subsystem": "nvmf", 00:15:42.692 "config": [ 00:15:42.692 { 00:15:42.692 "method": "nvmf_set_config", 00:15:42.692 "params": { 00:15:42.692 "discovery_filter": "match_any", 00:15:42.692 "admin_cmd_passthru": { 00:15:42.692 "identify_ctrlr": false 00:15:42.692 } 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "nvmf_set_max_subsystems", 00:15:42.692 "params": { 00:15:42.692 "max_subsystems": 1024 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "nvmf_set_crdt", 00:15:42.692 "params": { 00:15:42.692 "crdt1": 0, 00:15:42.692 "crdt2": 0, 00:15:42.692 "crdt3": 0 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "nvmf_create_transport", 00:15:42.692 "params": { 00:15:42.692 "trtype": "TCP", 00:15:42.692 "max_queue_depth": 128, 00:15:42.692 "max_io_qpairs_per_ctrlr": 127, 00:15:42.692 "in_capsule_data_size": 4096, 00:15:42.692 "max_io_size": 131072, 00:15:42.692 "io_unit_size": 131072, 00:15:42.692 "max_aq_depth": 128, 00:15:42.692 "num_shared_buffers": 511, 00:15:42.692 "buf_cache_size": 4294967295, 00:15:42.692 "dif_insert_or_strip": false, 00:15:42.692 "zcopy": false, 00:15:42.692 "c2h_success": false, 00:15:42.692 "sock_priority": 0, 00:15:42.692 "abort_timeout_sec": 1, 00:15:42.692 "ack_timeout": 0, 00:15:42.692 "data_wr_pool_size": 0 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "nvmf_create_subsystem", 00:15:42.692 "params": { 00:15:42.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.692 "allow_any_host": false, 00:15:42.692 "serial_number": "00000000000000000000", 00:15:42.692 "model_number": "SPDK bdev Controller", 00:15:42.692 "max_namespaces": 32, 00:15:42.692 "min_cntlid": 1, 00:15:42.692 "max_cntlid": 65519, 00:15:42.692 "ana_reporting": false 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "nvmf_subsystem_add_host", 00:15:42.692 "params": { 00:15:42.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.692 "host": "nqn.2016-06.io.spdk:host1", 00:15:42.692 "psk": "key0" 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "nvmf_subsystem_add_ns", 00:15:42.692 "params": { 00:15:42.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.692 "namespace": { 00:15:42.692 "nsid": 1, 00:15:42.692 "bdev_name": "malloc0", 00:15:42.692 "nguid": "5AD4C84A37684434902D83C27B9C2D71", 00:15:42.692 "uuid": 
"5ad4c84a-3768-4434-902d-83c27b9c2d71", 00:15:42.692 "no_auto_visible": false 00:15:42.692 } 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "nvmf_subsystem_add_listener", 00:15:42.692 "params": { 00:15:42.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.692 "listen_address": { 00:15:42.692 "trtype": "TCP", 00:15:42.692 "adrfam": "IPv4", 00:15:42.692 "traddr": "10.0.0.2", 00:15:42.692 "trsvcid": "4420" 00:15:42.692 }, 00:15:42.692 "secure_channel": true 00:15:42.692 } 00:15:42.692 } 00:15:42.692 ] 00:15:42.692 } 00:15:42.692 ] 00:15:42.692 }' 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2317727 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2317727 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2317727 ']' 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:42.692 02:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.692 [2024-05-15 02:32:29.869511] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:15:42.692 [2024-05-15 02:32:29.869586] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.692 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.692 [2024-05-15 02:32:29.946724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.692 [2024-05-15 02:32:30.061404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.692 [2024-05-15 02:32:30.061466] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.692 [2024-05-15 02:32:30.061495] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.692 [2024-05-15 02:32:30.061508] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.692 [2024-05-15 02:32:30.061518] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:42.692 [2024-05-15 02:32:30.061601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.949 [2024-05-15 02:32:30.298737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.949 [2024-05-15 02:32:30.330698] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:42.949 [2024-05-15 02:32:30.330780] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:42.949 [2024-05-15 02:32:30.339082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2317880 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2317880 /var/tmp/bdevperf.sock 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2317880 ']' 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:43.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
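The bdevperf instance started next (tls.sh@270) takes its whole configuration through -c /dev/fd/63: the JSON echoed below already carries the keyring_file_add_key entry for key0 and a bdev_nvme_attach_controller call with "psk": "key0", so the TLS connection to 10.0.0.2:4420 is set up at startup with no follow-up RPCs. A hedged sketch of driving bdevperf that way, with bperf.json standing in for the inlined JSON:

    #!/usr/bin/env bash
    # bperf.json is hypothetical; the test echoes the config inline instead.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(cat bperf.json) &

    # Once the socket answers, the nvme0n1 bdev attached over TLS is exercised with:
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests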
00:15:43.513 02:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:15:43.513 "subsystems": [ 00:15:43.513 { 00:15:43.513 "subsystem": "keyring", 00:15:43.513 "config": [ 00:15:43.513 { 00:15:43.513 "method": "keyring_file_add_key", 00:15:43.513 "params": { 00:15:43.513 "name": "key0", 00:15:43.513 "path": "/tmp/tmp.L7UzFZjU1y" 00:15:43.513 } 00:15:43.513 } 00:15:43.513 ] 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "subsystem": "iobuf", 00:15:43.513 "config": [ 00:15:43.513 { 00:15:43.513 "method": "iobuf_set_options", 00:15:43.513 "params": { 00:15:43.513 "small_pool_count": 8192, 00:15:43.513 "large_pool_count": 1024, 00:15:43.513 "small_bufsize": 8192, 00:15:43.513 "large_bufsize": 135168 00:15:43.513 } 00:15:43.513 } 00:15:43.513 ] 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "subsystem": "sock", 00:15:43.513 "config": [ 00:15:43.513 { 00:15:43.513 "method": "sock_impl_set_options", 00:15:43.513 "params": { 00:15:43.513 "impl_name": "posix", 00:15:43.513 "recv_buf_size": 2097152, 00:15:43.513 "send_buf_size": 2097152, 00:15:43.513 "enable_recv_pipe": true, 00:15:43.513 "enable_quickack": false, 00:15:43.513 "enable_placement_id": 0, 00:15:43.513 "enable_zerocopy_send_server": true, 00:15:43.513 "enable_zerocopy_send_client": false, 00:15:43.513 "zerocopy_threshold": 0, 00:15:43.513 "tls_version": 0, 00:15:43.513 "enable_ktls": false 00:15:43.513 } 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "method": "sock_impl_set_options", 00:15:43.513 "params": { 00:15:43.513 "impl_name": "ssl", 00:15:43.513 "recv_buf_size": 4096, 00:15:43.513 "send_buf_size": 4096, 00:15:43.513 "enable_recv_pipe": true, 00:15:43.513 "enable_quickack": false, 00:15:43.513 "enable_placement_id": 0, 00:15:43.513 "enable_zerocopy_send_server": true, 00:15:43.513 "enable_zerocopy_send_client": false, 00:15:43.513 "zerocopy_threshold": 0, 00:15:43.513 "tls_version": 0, 00:15:43.513 "enable_ktls": false 00:15:43.513 } 00:15:43.513 } 00:15:43.513 ] 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "subsystem": "vmd", 00:15:43.513 "config": [] 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "subsystem": "accel", 00:15:43.513 "config": [ 00:15:43.513 { 00:15:43.513 "method": "accel_set_options", 00:15:43.513 "params": { 00:15:43.513 "small_cache_size": 128, 00:15:43.513 "large_cache_size": 16, 00:15:43.513 "task_count": 2048, 00:15:43.513 "sequence_count": 2048, 00:15:43.513 "buf_count": 2048 00:15:43.513 } 00:15:43.513 } 00:15:43.513 ] 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "subsystem": "bdev", 00:15:43.513 "config": [ 00:15:43.513 { 00:15:43.513 "method": "bdev_set_options", 00:15:43.513 "params": { 00:15:43.513 "bdev_io_pool_size": 65535, 00:15:43.513 "bdev_io_cache_size": 256, 00:15:43.513 "bdev_auto_examine": true, 00:15:43.513 "iobuf_small_cache_size": 128, 00:15:43.513 "iobuf_large_cache_size": 16 00:15:43.513 } 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "method": "bdev_raid_set_options", 00:15:43.513 "params": { 00:15:43.513 "process_window_size_kb": 1024 00:15:43.513 } 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "method": "bdev_iscsi_set_options", 00:15:43.513 "params": { 00:15:43.513 "timeout_sec": 30 00:15:43.513 } 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "method": "bdev_nvme_set_options", 00:15:43.513 "params": { 00:15:43.513 "action_on_timeout": "none", 00:15:43.513 "timeout_us": 0, 00:15:43.513 "timeout_admin_us": 0, 00:15:43.513 "keep_alive_timeout_ms": 10000, 00:15:43.513 "arbitration_burst": 0, 00:15:43.513 "low_priority_weight": 0, 00:15:43.513 "medium_priority_weight": 0, 00:15:43.513 
"high_priority_weight": 0, 00:15:43.513 "nvme_adminq_poll_period_us": 10000, 00:15:43.513 "nvme_ioq_poll_period_us": 0, 00:15:43.513 "io_queue_requests": 512, 00:15:43.513 "delay_cmd_submit": true, 00:15:43.513 "transport_retry_count": 4, 00:15:43.513 "bdev_retry_count": 3, 00:15:43.513 "transport_ack_timeout": 0, 00:15:43.513 "ctrlr_loss_timeout_sec": 0, 00:15:43.513 "reconnect_delay_sec": 0, 00:15:43.513 "fast_io_fail_timeout_sec": 0, 00:15:43.513 "disable_auto_failback": false, 00:15:43.513 "generate_uuids": false, 00:15:43.513 "transport_tos": 0, 00:15:43.513 "nvme_error_stat": false, 00:15:43.513 "rdma_srq_size": 0, 00:15:43.513 "io_path_stat": false, 00:15:43.513 "allow_accel_sequence": false, 00:15:43.513 "rdma_max_cq_size": 0, 00:15:43.513 "rdma_cm_event_timeout_ms": 0, 00:15:43.513 "dhchap_digests": [ 00:15:43.513 "sha256", 00:15:43.513 "sha384", 00:15:43.513 "sha512" 00:15:43.513 ], 00:15:43.513 "dhchap_dhgroups": [ 00:15:43.513 "null", 00:15:43.513 "ffdhe2048", 00:15:43.513 "ffdhe3072", 00:15:43.513 "ffdhe4096", 00:15:43.513 "ffdhe6144", 00:15:43.513 "ffdhe8192" 00:15:43.513 ] 00:15:43.513 } 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "method": "bdev_nvme_attach_controller", 00:15:43.513 "params": { 00:15:43.513 "name": "nvme0", 00:15:43.513 "trtype": "TCP", 00:15:43.513 "adrfam": "IPv4", 00:15:43.513 "traddr": "10.0.0.2", 00:15:43.513 "trsvcid": "4420", 00:15:43.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.513 "prchk_reftag": false, 00:15:43.513 "prchk_guard": false, 00:15:43.513 "ctrlr_loss_timeout_sec": 0, 00:15:43.513 "reconnect_delay_sec": 0, 00:15:43.513 "fast_io_fail_timeout_sec": 0, 00:15:43.513 "psk": "key0", 00:15:43.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:43.513 "hdgst": false, 00:15:43.513 "ddgst": false 00:15:43.513 } 00:15:43.513 }, 00:15:43.513 { 00:15:43.513 "method": "bdev_nvme_set_hotplug", 00:15:43.513 "params": { 00:15:43.513 "period_us": 100000, 00:15:43.513 "enable": false 00:15:43.513 } 00:15:43.513 }, 00:15:43.514 { 00:15:43.514 "method": "bdev_enable_histogram", 00:15:43.514 "params": { 00:15:43.514 "name": "nvme0n1", 00:15:43.514 "enable": true 00:15:43.514 } 00:15:43.514 }, 00:15:43.514 { 00:15:43.514 "method": "bdev_wait_for_examine" 00:15:43.514 } 00:15:43.514 ] 00:15:43.514 }, 00:15:43.514 { 00:15:43.514 "subsystem": "nbd", 00:15:43.514 "config": [] 00:15:43.514 } 00:15:43.514 ] 00:15:43.514 }' 00:15:43.514 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:43.514 02:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.514 [2024-05-15 02:32:30.887550] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:15:43.514 [2024-05-15 02:32:30.887626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317880 ] 00:15:43.514 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.771 [2024-05-15 02:32:30.961890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.771 [2024-05-15 02:32:31.079605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.029 [2024-05-15 02:32:31.258190] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:44.594 02:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:44.594 02:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:44.594 02:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:44.594 02:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:44.852 02:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.852 02:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:44.852 Running I/O for 1 seconds... 00:15:46.233 00:15:46.233 Latency(us) 00:15:46.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.233 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.233 Verification LBA range: start 0x0 length 0x2000 00:15:46.233 nvme0n1 : 1.08 1285.42 5.02 0.00 0.00 96561.88 8495.41 129712.73 00:15:46.233 =================================================================================================================== 00:15:46.233 Total : 1285.42 5.02 0.00 0.00 96561.88 8495.41 129712.73 00:15:46.233 0 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:46.233 nvmf_trace.0 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2317880 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2317880 ']' 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2317880 
00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2317880 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2317880' 00:15:46.233 killing process with pid 2317880 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2317880 00:15:46.233 Received shutdown signal, test time was about 1.000000 seconds 00:15:46.233 00:15:46.233 Latency(us) 00:15:46.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.233 =================================================================================================================== 00:15:46.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2317880 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.233 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.233 rmmod nvme_tcp 00:15:46.490 rmmod nvme_fabrics 00:15:46.490 rmmod nvme_keyring 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2317727 ']' 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2317727 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2317727 ']' 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2317727 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2317727 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2317727' 00:15:46.490 killing process with pid 2317727 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2317727 00:15:46.490 [2024-05-15 02:32:33.711561] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:46.490 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 2317727 00:15:46.749 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.749 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.749 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.749 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.749 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.749 02:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.749 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.749 02:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.646 02:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:48.646 02:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WV7Cspjzv4 /tmp/tmp.RrxLDoggTs /tmp/tmp.L7UzFZjU1y 00:15:48.646 00:15:48.646 real 1m24.378s 00:15:48.646 user 2m13.461s 00:15:48.646 sys 0m28.571s 00:15:48.646 02:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:48.646 02:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.646 ************************************ 00:15:48.646 END TEST nvmf_tls 00:15:48.646 ************************************ 00:15:48.905 02:32:36 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:48.905 02:32:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:48.905 02:32:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:48.905 02:32:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.905 ************************************ 00:15:48.905 START TEST nvmf_fips 00:15:48.905 ************************************ 00:15:48.905 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:48.905 * Looking for test storage... 
00:15:48.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:48.905 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.905 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:48.905 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.905 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.905 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.905 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.905 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.906 02:32:36 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:48.906 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:48.907 Error setting digest 00:15:48.907 009243A5DB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:48.907 009243A5DB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:15:48.907 02:32:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.435 
02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:51.435 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:51.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:51.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:51.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.435 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:51.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:15:51.435 00:15:51.435 --- 10.0.0.2 ping statistics --- 00:15:51.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.435 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:15:51.436 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:15:51.436 00:15:51.436 --- 10.0.0.1 ping statistics --- 00:15:51.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.436 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2320533 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2320533 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 2320533 ']' 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:51.693 02:32:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:51.693 [2024-05-15 02:32:38.945266] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:15:51.693 [2024-05-15 02:32:38.945367] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.693 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.693 [2024-05-15 02:32:39.022339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.951 [2024-05-15 02:32:39.135095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.951 [2024-05-15 02:32:39.135143] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:51.951 [2024-05-15 02:32:39.135172] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.951 [2024-05-15 02:32:39.135183] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.951 [2024-05-15 02:32:39.135194] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.951 [2024-05-15 02:32:39.135247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:52.517 02:32:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.774 [2024-05-15 02:32:40.180572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.032 [2024-05-15 02:32:40.196546] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:53.032 [2024-05-15 02:32:40.196625] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:53.032 [2024-05-15 02:32:40.196841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.032 [2024-05-15 02:32:40.229066] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:53.032 malloc0 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2320701 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2320701 /var/tmp/bdevperf.sock 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 2320701 ']' 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:53.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:53.032 02:32:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:53.032 [2024-05-15 02:32:40.321519] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:15:53.032 [2024-05-15 02:32:40.321601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320701 ] 00:15:53.032 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.032 [2024-05-15 02:32:40.391135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.289 [2024-05-15 02:32:40.499373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.222 02:32:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:54.222 02:32:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:15:54.222 02:32:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:54.222 [2024-05-15 02:32:41.505943] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:54.222 [2024-05-15 02:32:41.506090] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:54.222 TLSTESTn1 00:15:54.222 02:32:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:54.479 Running I/O for 10 seconds... 
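Condensed from the trace above, the FIPS case boils down to the following sequence. This is an illustrative sketch, not a drop-in replacement for fips.sh: the interface names, addresses, NQNs and rpc.py arguments are copied from the log, the absolute Jenkins paths are shortened to repository-relative ones, and the MD5 probe is simplified to a plain file argument.

  # 1. Point OpenSSL at the generated FIPS config so only FIPS-approved algorithms load.
  export OPENSSL_CONF=spdk_fips.conf
  ! openssl md5 /dev/null || exit 1      # MD5 must be rejected ("unsupported"), proving FIPS is active

  # 2. Target side: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace created earlier in the trace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # 3. Store the TLS PSK with restrictive permissions (key value elided here).
  echo -n 'NVMeTLSkey-1:01:...' > test/nvmf/fips/key.txt
  chmod 0600 test/nvmf/fips/key.txt
  # setup_nvmf_tgt_conf then uses scripts/rpc.py to create the TCP transport, the malloc0
  # namespace and a listener on 10.0.0.2:4420 with TLS enabled.

  # 4. Initiator side: bdevperf attaches over TCP/TLS with the same PSK and drives verify I/O.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests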
00:16:04.481 00:16:04.481 Latency(us) 00:16:04.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.481 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:04.481 Verification LBA range: start 0x0 length 0x2000 00:16:04.481 TLSTESTn1 : 10.10 1035.72 4.05 0.00 0.00 123122.21 6213.78 114955.00 00:16:04.481 =================================================================================================================== 00:16:04.481 Total : 1035.72 4.05 0.00 0.00 123122.21 6213.78 114955.00 00:16:04.481 0 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:04.481 nvmf_trace.0 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2320701 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2320701 ']' 00:16:04.481 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2320701 00:16:04.739 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:16:04.739 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:04.739 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2320701 00:16:04.739 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:04.739 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:04.739 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2320701' 00:16:04.739 killing process with pid 2320701 00:16:04.739 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2320701 00:16:04.739 Received shutdown signal, test time was about 10.000000 seconds 00:16:04.739 00:16:04.739 Latency(us) 00:16:04.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.739 =================================================================================================================== 00:16:04.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:04.739 [2024-05-15 02:32:51.928456] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:04.739 02:32:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2320701 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:04.996 rmmod nvme_tcp 00:16:04.996 rmmod nvme_fabrics 00:16:04.996 rmmod nvme_keyring 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2320533 ']' 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2320533 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2320533 ']' 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2320533 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2320533 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2320533' 00:16:04.996 killing process with pid 2320533 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2320533 00:16:04.996 [2024-05-15 02:32:52.290143] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:04.996 [2024-05-15 02:32:52.290188] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:04.996 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2320533 00:16:05.253 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.253 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.254 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.254 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.254 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.254 02:32:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.254 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.254 02:32:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.787 02:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.787 02:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:07.787 00:16:07.787 real 0m18.530s 00:16:07.787 user 0m15.981s 00:16:07.787 sys 0m6.963s 00:16:07.787 02:32:54 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:07.787 02:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:07.787 ************************************ 00:16:07.787 END TEST nvmf_fips 00:16:07.787 ************************************ 00:16:07.787 02:32:54 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:16:07.787 02:32:54 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:16:07.787 02:32:54 nvmf_tcp -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:16:07.787 02:32:54 nvmf_tcp -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:16:07.787 02:32:54 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.787 02:32:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.684 02:32:57 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:09.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:09.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.684 02:32:57 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:09.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:09.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:16:09.685 02:32:57 nvmf_tcp -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
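The NIC discovery that reappears at the top of every sub-test (gather_supported_nvmf_pci_devs in nvmf/common.sh) reduces to roughly the sketch below. The PCI device IDs and the cvl_0_0/cvl_0_1 names are the ones printed in the trace; the real script keeps a pre-built pci_bus_cache map, which is replaced here by an lspci lookup purely for illustration.

  intel=8086
  e810_ids=(1592 159b)                       # Intel E810, bound to the ice driver
  x722_ids=(37d2)                            # Intel X722 (checked the same way, omitted below)

  pci_devs=()
  for id in "${e810_ids[@]}"; do
      # full PCI addresses of every matching function, e.g. 0000:0a:00.0 and 0000:0a:00.1
      pci_devs+=($(lspci -Dn -d "${intel}:${id}" | awk '{print $1}'))
  done

  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # each function exposes its kernel interface(s) under /sys/bus/pci/devices/<bdf>/net/
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $path ]] || continue
          dev=${path##*/}                    # e.g. cvl_0_0, cvl_0_1
          [[ $(cat "$path/operstate") == up ]] && net_devs+=("$dev")
      done
  done

  # nvmf.sh only schedules perf_adq when at least one physical port was found
  (( ${#net_devs[@]} > 0 )) || exit 0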
00:16:09.685 02:32:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:09.685 02:32:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:09.685 02:32:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:09.685 ************************************ 00:16:09.685 START TEST nvmf_perf_adq 00:16:09.685 ************************************ 00:16:09.685 02:32:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:16:09.943 * Looking for test storage... 00:16:09.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.943 02:32:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:16:09.944 02:32:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:12.475 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:12.475 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:12.475 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:12.476 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:12.476 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:16:12.476 02:32:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:16:13.041 02:33:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:16:14.412 02:33:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:16:19.680 02:33:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:16:19.680 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.680 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.680 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.680 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.680 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.680 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.680 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:19.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:19.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:19.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:19.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.681 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:19.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:16:19.682 00:16:19.682 --- 10.0.0.2 ping statistics --- 00:16:19.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.682 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:19.682 00:16:19.682 --- 10.0.0.1 ping statistics --- 00:16:19.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.682 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2327268 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2327268 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2327268 ']' 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
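Annotation: nvmf_tcp_init above splits the two ports between a network namespace (target side, cvl_0_0 at 10.0.0.2) and the root namespace (initiator side, cvl_0_1 at 10.0.0.1), opens TCP/4420, and ping-checks both directions. Condensed to plain commands, with the interface names and addresses assumed to match this run:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                               # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target namespace -> root namespace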
00:16:19.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:19.682 02:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:19.682 [2024-05-15 02:33:06.900817] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:16:19.682 [2024-05-15 02:33:06.900895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.682 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.682 [2024-05-15 02:33:06.982459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.939 [2024-05-15 02:33:07.102608] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.939 [2024-05-15 02:33:07.102653] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.939 [2024-05-15 02:33:07.102682] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.939 [2024-05-15 02:33:07.102693] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.939 [2024-05-15 02:33:07.102703] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.939 [2024-05-15 02:33:07.102782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.939 [2024-05-15 02:33:07.102845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.939 [2024-05-15 02:33:07.102916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.939 [2024-05-15 02:33:07.102919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.503 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:20.503 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:16:20.503 02:33:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.503 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.503 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:20.503 02:33:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.503 02:33:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:16:20.503 02:33:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:16:20.503 02:33:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:16:20.504 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.504 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.762 02:33:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:16:20.762 02:33:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:16:20.762 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.762 02:33:07 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.762 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.762 02:33:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:16:20.762 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.762 02:33:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 [2024-05-15 02:33:08.073705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 Malloc1 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 [2024-05-15 02:33:08.125281] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:20.762 [2024-05-15 02:33:08.125607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2327807 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:16:20.762 02:33:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:20.762 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.319 02:33:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:23.319 02:33:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.319 02:33:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.319 02:33:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.319 02:33:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:16:23.319 "tick_rate": 2700000000, 00:16:23.319 "poll_groups": [ 00:16:23.319 { 00:16:23.319 "name": "nvmf_tgt_poll_group_000", 00:16:23.319 "admin_qpairs": 1, 00:16:23.319 "io_qpairs": 1, 00:16:23.319 "current_admin_qpairs": 1, 00:16:23.319 "current_io_qpairs": 1, 00:16:23.319 "pending_bdev_io": 0, 00:16:23.319 "completed_nvme_io": 20656, 00:16:23.319 "transports": [ 00:16:23.319 { 00:16:23.319 "trtype": "TCP" 00:16:23.319 } 00:16:23.319 ] 00:16:23.319 }, 00:16:23.319 { 00:16:23.319 "name": "nvmf_tgt_poll_group_001", 00:16:23.319 "admin_qpairs": 0, 00:16:23.319 "io_qpairs": 1, 00:16:23.319 "current_admin_qpairs": 0, 00:16:23.319 "current_io_qpairs": 1, 00:16:23.319 "pending_bdev_io": 0, 00:16:23.319 "completed_nvme_io": 21013, 00:16:23.319 "transports": [ 00:16:23.319 { 00:16:23.319 "trtype": "TCP" 00:16:23.319 } 00:16:23.319 ] 00:16:23.319 }, 00:16:23.319 { 00:16:23.319 "name": "nvmf_tgt_poll_group_002", 00:16:23.319 "admin_qpairs": 0, 00:16:23.319 "io_qpairs": 1, 00:16:23.319 "current_admin_qpairs": 0, 00:16:23.319 "current_io_qpairs": 1, 00:16:23.319 "pending_bdev_io": 0, 00:16:23.320 "completed_nvme_io": 17569, 00:16:23.320 "transports": [ 00:16:23.320 { 00:16:23.320 "trtype": "TCP" 00:16:23.320 } 00:16:23.320 ] 00:16:23.320 }, 00:16:23.320 { 00:16:23.320 "name": "nvmf_tgt_poll_group_003", 00:16:23.320 "admin_qpairs": 0, 00:16:23.320 "io_qpairs": 1, 00:16:23.320 "current_admin_qpairs": 0, 00:16:23.320 "current_io_qpairs": 1, 00:16:23.320 "pending_bdev_io": 0, 00:16:23.320 "completed_nvme_io": 14850, 00:16:23.320 "transports": [ 00:16:23.320 { 00:16:23.320 "trtype": "TCP" 00:16:23.320 } 00:16:23.320 ] 00:16:23.320 } 00:16:23.320 ] 00:16:23.320 }' 00:16:23.320 02:33:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:16:23.320 02:33:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:16:23.320 02:33:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:16:23.320 02:33:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:16:23.320 02:33:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2327807 00:16:31.428 Initializing NVMe Controllers 00:16:31.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:31.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:31.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:31.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:31.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:31.428 Initialization complete. Launching workers. 
00:16:31.428 ======================================================== 00:16:31.428 Latency(us) 00:16:31.428 Device Information : IOPS MiB/s Average min max 00:16:31.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9241.60 36.10 6927.75 4513.63 10183.53 00:16:31.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11022.19 43.06 5806.84 1813.97 8778.47 00:16:31.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7823.10 30.56 8180.57 2546.26 13940.89 00:16:31.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10870.89 42.46 5888.92 2011.36 9937.36 00:16:31.428 ======================================================== 00:16:31.428 Total : 38957.78 152.18 6572.31 1813.97 13940.89 00:16:31.428 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.428 rmmod nvme_tcp 00:16:31.428 rmmod nvme_fabrics 00:16:31.428 rmmod nvme_keyring 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2327268 ']' 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2327268 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2327268 ']' 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2327268 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2327268 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2327268' 00:16:31.428 killing process with pid 2327268 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2327268 00:16:31.428 [2024-05-15 02:33:18.350561] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2327268 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:31.428 02:33:18 
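Annotation: the acceptance check for this first (placement-id 0) run is the nvmf_get_stats/jq pipeline a few lines above: with perf pinned to four cores (-c 0xF0), each of the four target poll groups must end up owning exactly one I/O queue pair. A standalone sketch of the same check, assuming scripts/rpc.py is on hand and reachable from the target's namespace:

  count=$(scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
          | wc -l)
  # Four connections across four poll groups: anything other than 4 means qpairs piled up.
  [ "$count" -eq 4 ] && echo "qpairs spread one per poll group (OK)" \
                     || echo "unexpected distribution: $count poll groups with 1 qpair"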
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.428 02:33:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.332 02:33:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:33.332 02:33:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:16:33.332 02:33:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:16:33.900 02:33:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:16:35.798 02:33:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.067 
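Annotation: between the baseline run and the ADQ run the test reloads the E810 driver so channel and traffic-class state starts from scratch; stripped of the xtrace prefixes, that step (perf_adq.sh adq_reload_driver) is just:

  rmmod ice
  modprobe ice
  sleep 5    # give the ports time to re-register before the next nvmftestinit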
02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.067 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:41.068 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:41.068 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:41.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:41.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:16:41.068 00:16:41.068 --- 10.0.0.2 ping statistics --- 00:16:41.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.068 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:16:41.068 00:16:41.068 --- 10.0.0.1 ping statistics --- 00:16:41.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.068 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:16:41.068 net.core.busy_poll = 1 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:16:41.068 net.core.busy_read = 1 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2330549 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2330549 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2330549 ']' 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:41.068 02:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.068 [2024-05-15 02:33:28.032972] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:16:41.069 [2024-05-15 02:33:28.033053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.069 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.069 [2024-05-15 02:33:28.110305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.069 [2024-05-15 02:33:28.216907] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.069 [2024-05-15 02:33:28.216979] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.069 [2024-05-15 02:33:28.216994] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.069 [2024-05-15 02:33:28.217005] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.069 [2024-05-15 02:33:28.217015] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
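Annotation: adq_configure_driver above is the host-side half of ADQ: hardware TC offload and busy polling are switched on, the NIC queues are split into two traffic classes with mqprio, and a flower filter steers inbound NVMe/TCP (dst port 4420) into TC1 in hardware. A sketch of the same sequence; in the captured run every command is prefixed with "ip netns exec cvl_0_0_ns_spdk" (omitted here for brevity), and the interface name and IP are assumed to match this run:

  IFACE=cvl_0_0
  ethtool --offload "$IFACE" hw-tc-offload on
  ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 (default/admin) on queues 0-1, TC1 (NVMe/TCP data) on queues 2-3.
  tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev "$IFACE" ingress
  # Steer inbound NVMe/TCP toward the target listener into TC1, hardware only (skip_sw).
  tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1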
00:16:41.069 [2024-05-15 02:33:28.217063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.069 [2024-05-15 02:33:28.217123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.069 [2024-05-15 02:33:28.217204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.069 [2024-05-15 02:33:28.217207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 [2024-05-15 02:33:28.409519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 Malloc1 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.069 02:33:28 
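Annotation: on the target side, the only knobs that differ from the first run are the socket placement mode and the socket priority passed to the TCP transport (adq_configure_nvmf_target 1 versus 0 earlier). A sketch of just those RPCs, with scripts/rpc.py assumed as the client; the test issues the same calls through its rpc_cmd wrapper inside the target namespace:

  RPC=scripts/rpc.py
  impl=$($RPC sock_get_default_impl | jq -r .impl_name)    # "posix" in this run
  # placement-id 1 (the baseline run used 0) enables placement-aware poll-group assignment,
  # the target-side half of ADQ; zerocopy send stays enabled as before.
  $RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i "$impl"
  $RPC framework_start_init
  # --sock-priority 1 tags the target's NVMe/TCP sockets so they map onto the ADQ traffic class.
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1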
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 [2024-05-15 02:33:28.460026] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:41.069 [2024-05-15 02:33:28.460353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2330590 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:41.069 02:33:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:16:41.327 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:16:43.228 "tick_rate": 2700000000, 00:16:43.228 "poll_groups": [ 00:16:43.228 { 00:16:43.228 "name": "nvmf_tgt_poll_group_000", 00:16:43.228 "admin_qpairs": 1, 00:16:43.228 "io_qpairs": 0, 00:16:43.228 "current_admin_qpairs": 1, 00:16:43.228 "current_io_qpairs": 0, 00:16:43.228 "pending_bdev_io": 0, 00:16:43.228 "completed_nvme_io": 0, 00:16:43.228 "transports": [ 00:16:43.228 { 00:16:43.228 "trtype": "TCP" 00:16:43.228 } 00:16:43.228 ] 00:16:43.228 }, 00:16:43.228 { 00:16:43.228 "name": "nvmf_tgt_poll_group_001", 00:16:43.228 "admin_qpairs": 0, 00:16:43.228 "io_qpairs": 4, 00:16:43.228 "current_admin_qpairs": 0, 00:16:43.228 "current_io_qpairs": 4, 00:16:43.228 "pending_bdev_io": 0, 00:16:43.228 "completed_nvme_io": 35095, 00:16:43.228 "transports": [ 00:16:43.228 { 00:16:43.228 "trtype": "TCP" 00:16:43.228 } 00:16:43.228 ] 00:16:43.228 }, 00:16:43.228 { 00:16:43.228 "name": 
"nvmf_tgt_poll_group_002", 00:16:43.228 "admin_qpairs": 0, 00:16:43.228 "io_qpairs": 0, 00:16:43.228 "current_admin_qpairs": 0, 00:16:43.228 "current_io_qpairs": 0, 00:16:43.228 "pending_bdev_io": 0, 00:16:43.228 "completed_nvme_io": 0, 00:16:43.228 "transports": [ 00:16:43.228 { 00:16:43.228 "trtype": "TCP" 00:16:43.228 } 00:16:43.228 ] 00:16:43.228 }, 00:16:43.228 { 00:16:43.228 "name": "nvmf_tgt_poll_group_003", 00:16:43.228 "admin_qpairs": 0, 00:16:43.228 "io_qpairs": 0, 00:16:43.228 "current_admin_qpairs": 0, 00:16:43.228 "current_io_qpairs": 0, 00:16:43.228 "pending_bdev_io": 0, 00:16:43.228 "completed_nvme_io": 0, 00:16:43.228 "transports": [ 00:16:43.228 { 00:16:43.228 "trtype": "TCP" 00:16:43.228 } 00:16:43.228 ] 00:16:43.228 } 00:16:43.228 ] 00:16:43.228 }' 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:16:43.228 02:33:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2330590 00:16:51.337 Initializing NVMe Controllers 00:16:51.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:51.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:51.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:51.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:51.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:51.337 Initialization complete. Launching workers. 
00:16:51.337 ======================================================== 00:16:51.337 Latency(us) 00:16:51.337 Device Information : IOPS MiB/s Average min max 00:16:51.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4651.90 18.17 13759.56 2872.40 60489.04 00:16:51.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4684.00 18.30 13666.46 1940.25 59610.89 00:16:51.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4797.70 18.74 13356.06 1937.38 61205.40 00:16:51.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4307.30 16.83 14861.84 2143.35 60260.95 00:16:51.337 ======================================================== 00:16:51.337 Total : 18440.90 72.03 13888.40 1937.38 61205.40 00:16:51.337 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.337 rmmod nvme_tcp 00:16:51.337 rmmod nvme_fabrics 00:16:51.337 rmmod nvme_keyring 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2330549 ']' 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2330549 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2330549 ']' 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2330549 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2330549 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2330549' 00:16:51.337 killing process with pid 2330549 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2330549 00:16:51.337 [2024-05-15 02:33:38.689939] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:51.337 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2330549 00:16:51.595 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.595 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.595 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.595 
02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.595 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.595 02:33:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.595 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.595 02:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.916 02:33:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.916 02:33:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:54.916 00:16:54.916 real 0m44.957s 00:16:54.916 user 2m28.677s 00:16:54.916 sys 0m15.446s 00:16:54.916 02:33:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:54.916 02:33:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:54.916 ************************************ 00:16:54.916 END TEST nvmf_perf_adq 00:16:54.916 ************************************ 00:16:54.916 02:33:42 nvmf_tcp -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:54.916 02:33:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:54.916 02:33:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:54.916 02:33:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.916 ************************************ 00:16:54.916 START TEST nvmf_shutdown 00:16:54.916 ************************************ 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:54.916 * Looking for test storage... 
00:16:54.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:54.916 ************************************ 00:16:54.916 START TEST nvmf_shutdown_tc1 00:16:54.916 ************************************ 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:16:54.916 02:33:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:54.916 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.917 02:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:57.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:57.446 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:57.447 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.447 02:33:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:57.447 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:57.447 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:57.447 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:57.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:16:57.705 00:16:57.705 --- 10.0.0.2 ping statistics --- 00:16:57.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.705 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:16:57.705 00:16:57.705 --- 10.0.0.1 ping statistics --- 00:16:57.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.705 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2334294 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2334294 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2334294 ']' 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:57.705 02:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:57.705 [2024-05-15 02:33:44.960685] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
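
The nvmf_tcp_init trace above builds a simple two-interface loopback topology out of the E810 ports found earlier: cvl_0_0 is moved into a private network namespace to act as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and both directions are ping-verified before nvmf_tgt is started inside the namespace. A minimal stand-alone sketch of that setup follows; the interface names, namespace name and 10.0.0.x addresses are the ones from this particular run and will differ on other machines.

# Minimal sketch of the namespace topology built by nvmf_tcp_init above.
# Names and addresses are taken from this run, not fixed values; run as root.
set -e

TARGET_NS=cvl_0_0_ns_spdk      # namespace that will host nvmf_tgt
TARGET_IF=cvl_0_0              # target-side port, gets 10.0.0.2
INITIATOR_IF=cvl_0_1           # initiator-side port, gets 10.0.0.1

ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, as the trace does with ping -c 1.
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
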
00:16:57.705 [2024-05-15 02:33:44.960787] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.705 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.705 [2024-05-15 02:33:45.043703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.964 [2024-05-15 02:33:45.161553] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.964 [2024-05-15 02:33:45.161616] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.964 [2024-05-15 02:33:45.161633] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.964 [2024-05-15 02:33:45.161655] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.964 [2024-05-15 02:33:45.161667] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.964 [2024-05-15 02:33:45.161771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.964 [2024-05-15 02:33:45.161869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.964 [2024-05-15 02:33:45.161969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.964 [2024-05-15 02:33:45.161965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:58.528 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:58.528 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:16:58.528 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.528 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.528 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:58.528 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.528 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.528 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.528 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:58.786 [2024-05-15 02:33:45.944945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.786 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.786 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:58.786 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:58.786 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:58.786 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:58.786 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:58.786 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.786 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.787 02:33:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:58.787 Malloc1 00:16:58.787 [2024-05-15 02:33:46.023747] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:58.787 [2024-05-15 02:33:46.024059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.787 Malloc2 00:16:58.787 Malloc3 00:16:58.787 Malloc4 00:16:58.787 Malloc5 00:16:59.045 Malloc6 00:16:59.045 Malloc7 00:16:59.045 Malloc8 00:16:59.045 Malloc9 00:16:59.045 Malloc10 00:16:59.045 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.045 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:59.045 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.045 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:59.303 02:33:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2334480 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2334480 /var/tmp/bdevperf.sock 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2334480 ']' 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.303 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.303 { 00:16:59.303 "params": { 00:16:59.303 "name": "Nvme$subsystem", 00:16:59.303 "trtype": "$TEST_TRANSPORT", 00:16:59.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.303 "adrfam": "ipv4", 00:16:59.303 "trsvcid": "$NVMF_PORT", 00:16:59.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.304 { 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme$subsystem", 00:16:59.304 "trtype": "$TEST_TRANSPORT", 00:16:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "$NVMF_PORT", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.304 { 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme$subsystem", 00:16:59.304 "trtype": "$TEST_TRANSPORT", 00:16:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "$NVMF_PORT", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.304 { 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme$subsystem", 00:16:59.304 "trtype": "$TEST_TRANSPORT", 00:16:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "$NVMF_PORT", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.304 { 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme$subsystem", 00:16:59.304 "trtype": "$TEST_TRANSPORT", 00:16:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "$NVMF_PORT", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.304 { 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme$subsystem", 00:16:59.304 "trtype": "$TEST_TRANSPORT", 00:16:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "$NVMF_PORT", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.304 { 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme$subsystem", 00:16:59.304 "trtype": "$TEST_TRANSPORT", 00:16:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "$NVMF_PORT", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.304 { 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme$subsystem", 00:16:59.304 "trtype": "$TEST_TRANSPORT", 00:16:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "$NVMF_PORT", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.304 { 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme$subsystem", 00:16:59.304 "trtype": "$TEST_TRANSPORT", 00:16:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "$NVMF_PORT", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.304 { 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme$subsystem", 00:16:59.304 "trtype": "$TEST_TRANSPORT", 00:16:59.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "$NVMF_PORT", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.304 "hdgst": ${hdgst:-false}, 00:16:59.304 "ddgst": ${ddgst:-false} 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 } 00:16:59.304 EOF 00:16:59.304 )") 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:16:59.304 02:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme1", 00:16:59.304 "trtype": "tcp", 00:16:59.304 "traddr": "10.0.0.2", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "4420", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.304 "hdgst": false, 00:16:59.304 "ddgst": false 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 },{ 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme2", 00:16:59.304 "trtype": "tcp", 00:16:59.304 "traddr": "10.0.0.2", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "4420", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:59.304 "hdgst": false, 00:16:59.304 "ddgst": false 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 },{ 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme3", 00:16:59.304 "trtype": "tcp", 00:16:59.304 "traddr": "10.0.0.2", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "4420", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:59.304 "hdgst": false, 00:16:59.304 "ddgst": false 00:16:59.304 }, 00:16:59.304 "method": "bdev_nvme_attach_controller" 00:16:59.304 },{ 00:16:59.304 "params": { 00:16:59.304 "name": "Nvme4", 00:16:59.304 "trtype": "tcp", 00:16:59.304 "traddr": "10.0.0.2", 00:16:59.304 "adrfam": "ipv4", 00:16:59.304 "trsvcid": "4420", 00:16:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:59.304 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:59.304 "hdgst": false, 00:16:59.305 "ddgst": false 00:16:59.305 }, 00:16:59.305 "method": "bdev_nvme_attach_controller" 00:16:59.305 },{ 00:16:59.305 "params": { 00:16:59.305 "name": "Nvme5", 00:16:59.305 "trtype": "tcp", 00:16:59.305 "traddr": "10.0.0.2", 00:16:59.305 "adrfam": "ipv4", 00:16:59.305 "trsvcid": "4420", 00:16:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:59.305 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:59.305 "hdgst": false, 00:16:59.305 "ddgst": false 00:16:59.305 }, 00:16:59.305 "method": "bdev_nvme_attach_controller" 00:16:59.305 },{ 00:16:59.305 "params": { 00:16:59.305 "name": "Nvme6", 00:16:59.305 "trtype": "tcp", 00:16:59.305 "traddr": "10.0.0.2", 00:16:59.305 "adrfam": "ipv4", 00:16:59.305 "trsvcid": "4420", 00:16:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:59.305 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:59.305 "hdgst": false, 00:16:59.305 "ddgst": false 00:16:59.305 }, 00:16:59.305 "method": "bdev_nvme_attach_controller" 00:16:59.305 },{ 00:16:59.305 "params": { 00:16:59.305 "name": "Nvme7", 00:16:59.305 "trtype": "tcp", 00:16:59.305 "traddr": "10.0.0.2", 00:16:59.305 "adrfam": "ipv4", 00:16:59.305 "trsvcid": "4420", 00:16:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:59.305 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:59.305 "hdgst": false, 00:16:59.305 "ddgst": false 00:16:59.305 }, 00:16:59.305 "method": "bdev_nvme_attach_controller" 00:16:59.305 },{ 00:16:59.305 "params": { 00:16:59.305 "name": "Nvme8", 00:16:59.305 "trtype": "tcp", 00:16:59.305 "traddr": "10.0.0.2", 00:16:59.305 "adrfam": "ipv4", 00:16:59.305 "trsvcid": "4420", 00:16:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:59.305 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:59.305 "hdgst": false, 
00:16:59.305 "ddgst": false 00:16:59.305 }, 00:16:59.305 "method": "bdev_nvme_attach_controller" 00:16:59.305 },{ 00:16:59.305 "params": { 00:16:59.305 "name": "Nvme9", 00:16:59.305 "trtype": "tcp", 00:16:59.305 "traddr": "10.0.0.2", 00:16:59.305 "adrfam": "ipv4", 00:16:59.305 "trsvcid": "4420", 00:16:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:59.305 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:59.305 "hdgst": false, 00:16:59.305 "ddgst": false 00:16:59.305 }, 00:16:59.305 "method": "bdev_nvme_attach_controller" 00:16:59.305 },{ 00:16:59.305 "params": { 00:16:59.305 "name": "Nvme10", 00:16:59.305 "trtype": "tcp", 00:16:59.305 "traddr": "10.0.0.2", 00:16:59.305 "adrfam": "ipv4", 00:16:59.305 "trsvcid": "4420", 00:16:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:59.305 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:59.305 "hdgst": false, 00:16:59.305 "ddgst": false 00:16:59.305 }, 00:16:59.305 "method": "bdev_nvme_attach_controller" 00:16:59.305 }' 00:16:59.305 [2024-05-15 02:33:46.509908] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:16:59.305 [2024-05-15 02:33:46.510009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:59.305 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.305 [2024-05-15 02:33:46.583562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.305 [2024-05-15 02:33:46.694543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.202 02:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:01.202 02:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:17:01.202 02:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:01.202 02:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.202 02:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:01.202 02:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.202 02:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2334480 00:17:01.202 02:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:17:01.202 02:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:17:02.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2334480 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2334294 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
-- # local subsystem config 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.136 { 00:17:02.136 "params": { 00:17:02.136 "name": "Nvme$subsystem", 00:17:02.136 "trtype": "$TEST_TRANSPORT", 00:17:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.136 "adrfam": "ipv4", 00:17:02.136 "trsvcid": "$NVMF_PORT", 00:17:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.136 "hdgst": ${hdgst:-false}, 00:17:02.136 "ddgst": ${ddgst:-false} 00:17:02.136 }, 00:17:02.136 "method": "bdev_nvme_attach_controller" 00:17:02.136 } 00:17:02.136 EOF 00:17:02.136 )") 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.136 { 00:17:02.136 "params": { 00:17:02.136 "name": "Nvme$subsystem", 00:17:02.136 "trtype": "$TEST_TRANSPORT", 00:17:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.136 "adrfam": "ipv4", 00:17:02.136 "trsvcid": "$NVMF_PORT", 00:17:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.136 "hdgst": ${hdgst:-false}, 00:17:02.136 "ddgst": ${ddgst:-false} 00:17:02.136 }, 00:17:02.136 "method": "bdev_nvme_attach_controller" 00:17:02.136 } 00:17:02.136 EOF 00:17:02.136 )") 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.136 { 00:17:02.136 "params": { 00:17:02.136 "name": "Nvme$subsystem", 00:17:02.136 "trtype": "$TEST_TRANSPORT", 00:17:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.136 "adrfam": "ipv4", 00:17:02.136 "trsvcid": "$NVMF_PORT", 00:17:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.136 "hdgst": ${hdgst:-false}, 00:17:02.136 "ddgst": ${ddgst:-false} 00:17:02.136 }, 00:17:02.136 "method": "bdev_nvme_attach_controller" 00:17:02.136 } 00:17:02.136 EOF 00:17:02.136 )") 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.136 { 00:17:02.136 "params": { 00:17:02.136 "name": "Nvme$subsystem", 00:17:02.136 "trtype": "$TEST_TRANSPORT", 00:17:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.136 "adrfam": "ipv4", 00:17:02.136 "trsvcid": "$NVMF_PORT", 00:17:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.136 "hdgst": ${hdgst:-false}, 00:17:02.136 "ddgst": ${ddgst:-false} 00:17:02.136 }, 00:17:02.136 "method": "bdev_nvme_attach_controller" 00:17:02.136 } 00:17:02.136 EOF 00:17:02.136 )") 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.136 { 00:17:02.136 "params": { 00:17:02.136 "name": "Nvme$subsystem", 00:17:02.136 "trtype": "$TEST_TRANSPORT", 00:17:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.136 "adrfam": "ipv4", 00:17:02.136 "trsvcid": "$NVMF_PORT", 00:17:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.136 "hdgst": ${hdgst:-false}, 00:17:02.136 "ddgst": ${ddgst:-false} 00:17:02.136 }, 00:17:02.136 "method": "bdev_nvme_attach_controller" 00:17:02.136 } 00:17:02.136 EOF 00:17:02.136 )") 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.136 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.136 { 00:17:02.136 "params": { 00:17:02.136 "name": "Nvme$subsystem", 00:17:02.136 "trtype": "$TEST_TRANSPORT", 00:17:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.136 "adrfam": "ipv4", 00:17:02.136 "trsvcid": "$NVMF_PORT", 00:17:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.137 "hdgst": ${hdgst:-false}, 00:17:02.137 "ddgst": ${ddgst:-false} 00:17:02.137 }, 00:17:02.137 "method": "bdev_nvme_attach_controller" 00:17:02.137 } 00:17:02.137 EOF 00:17:02.137 )") 00:17:02.137 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.395 { 00:17:02.395 "params": { 00:17:02.395 "name": "Nvme$subsystem", 00:17:02.395 "trtype": "$TEST_TRANSPORT", 00:17:02.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.395 "adrfam": "ipv4", 00:17:02.395 "trsvcid": "$NVMF_PORT", 00:17:02.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.395 "hdgst": ${hdgst:-false}, 00:17:02.395 "ddgst": ${ddgst:-false} 00:17:02.395 }, 00:17:02.395 "method": "bdev_nvme_attach_controller" 00:17:02.395 } 00:17:02.395 EOF 00:17:02.395 )") 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.395 { 00:17:02.395 "params": { 00:17:02.395 "name": "Nvme$subsystem", 00:17:02.395 "trtype": "$TEST_TRANSPORT", 00:17:02.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.395 "adrfam": "ipv4", 00:17:02.395 "trsvcid": "$NVMF_PORT", 00:17:02.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.395 "hdgst": ${hdgst:-false}, 00:17:02.395 "ddgst": ${ddgst:-false} 00:17:02.395 }, 00:17:02.395 "method": "bdev_nvme_attach_controller" 00:17:02.395 } 00:17:02.395 EOF 00:17:02.395 )") 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:17:02.395 02:33:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.395 { 00:17:02.395 "params": { 00:17:02.395 "name": "Nvme$subsystem", 00:17:02.395 "trtype": "$TEST_TRANSPORT", 00:17:02.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.395 "adrfam": "ipv4", 00:17:02.395 "trsvcid": "$NVMF_PORT", 00:17:02.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.395 "hdgst": ${hdgst:-false}, 00:17:02.395 "ddgst": ${ddgst:-false} 00:17:02.395 }, 00:17:02.395 "method": "bdev_nvme_attach_controller" 00:17:02.395 } 00:17:02.395 EOF 00:17:02.395 )") 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.395 { 00:17:02.395 "params": { 00:17:02.395 "name": "Nvme$subsystem", 00:17:02.395 "trtype": "$TEST_TRANSPORT", 00:17:02.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.395 "adrfam": "ipv4", 00:17:02.395 "trsvcid": "$NVMF_PORT", 00:17:02.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.395 "hdgst": ${hdgst:-false}, 00:17:02.395 "ddgst": ${ddgst:-false} 00:17:02.395 }, 00:17:02.395 "method": "bdev_nvme_attach_controller" 00:17:02.395 } 00:17:02.395 EOF 00:17:02.395 )") 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
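
This second pass regenerates the same ten-subsystem config, this time for the bdevperf run driven by shutdown.sh@91. In the test the JSON never touches disk: it is handed to bdevperf through process substitution, which is why the traced command line shows an anonymous /dev/fd/62 path. A hedged sketch of that invocation is below; $rootdir is a placeholder for the SPDK checkout (the CI run uses its Jenkins workspace path), and gen_nvmf_target_json is assumed to be defined by sourcing test/nvmf/common.sh, as shutdown.sh does at the top of this log.

# Sketch of the bdevperf invocation used below; the argument list matches the
# traced command line, $rootdir is a placeholder.
rootdir=/path/to/spdk

"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1
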
00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:17:02.395 02:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.395 "params": { 00:17:02.395 "name": "Nvme1", 00:17:02.395 "trtype": "tcp", 00:17:02.395 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.396 "hdgst": false, 00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 },{ 00:17:02.396 "params": { 00:17:02.396 "name": "Nvme2", 00:17:02.396 "trtype": "tcp", 00:17:02.396 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:02.396 "hdgst": false, 00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 },{ 00:17:02.396 "params": { 00:17:02.396 "name": "Nvme3", 00:17:02.396 "trtype": "tcp", 00:17:02.396 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:02.396 "hdgst": false, 00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 },{ 00:17:02.396 "params": { 00:17:02.396 "name": "Nvme4", 00:17:02.396 "trtype": "tcp", 00:17:02.396 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:02.396 "hdgst": false, 00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 },{ 00:17:02.396 "params": { 00:17:02.396 "name": "Nvme5", 00:17:02.396 "trtype": "tcp", 00:17:02.396 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:02.396 "hdgst": false, 00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 },{ 00:17:02.396 "params": { 00:17:02.396 "name": "Nvme6", 00:17:02.396 "trtype": "tcp", 00:17:02.396 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:02.396 "hdgst": false, 00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 },{ 00:17:02.396 "params": { 00:17:02.396 "name": "Nvme7", 00:17:02.396 "trtype": "tcp", 00:17:02.396 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:02.396 "hdgst": false, 00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 },{ 00:17:02.396 "params": { 00:17:02.396 "name": "Nvme8", 00:17:02.396 "trtype": "tcp", 00:17:02.396 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:02.396 "hdgst": false, 
00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 },{ 00:17:02.396 "params": { 00:17:02.396 "name": "Nvme9", 00:17:02.396 "trtype": "tcp", 00:17:02.396 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:02.396 "hdgst": false, 00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 },{ 00:17:02.396 "params": { 00:17:02.396 "name": "Nvme10", 00:17:02.396 "trtype": "tcp", 00:17:02.396 "traddr": "10.0.0.2", 00:17:02.396 "adrfam": "ipv4", 00:17:02.396 "trsvcid": "4420", 00:17:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:02.396 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:02.396 "hdgst": false, 00:17:02.396 "ddgst": false 00:17:02.396 }, 00:17:02.396 "method": "bdev_nvme_attach_controller" 00:17:02.396 }' 00:17:02.396 [2024-05-15 02:33:49.573673] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:02.396 [2024-05-15 02:33:49.573766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334902 ] 00:17:02.396 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.396 [2024-05-15 02:33:49.649469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.396 [2024-05-15 02:33:49.765366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.295 Running I/O for 1 seconds... 00:17:05.228 00:17:05.228 Latency(us) 00:17:05.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.228 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 00:17:05.228 Nvme1n1 : 1.08 177.92 11.12 0.00 0.00 356119.13 22913.33 292047.83 00:17:05.228 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 00:17:05.228 Nvme2n1 : 1.06 242.25 15.14 0.00 0.00 256730.26 21748.24 237677.23 00:17:05.228 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 00:17:05.228 Nvme3n1 : 1.11 230.75 14.42 0.00 0.00 265522.06 24369.68 265639.25 00:17:05.228 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 00:17:05.228 Nvme4n1 : 1.12 228.85 14.30 0.00 0.00 263216.17 22524.97 268746.15 00:17:05.228 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 00:17:05.228 Nvme5n1 : 1.17 218.35 13.65 0.00 0.00 272020.10 23107.51 309135.74 00:17:05.228 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 00:17:05.228 Nvme6n1 : 1.13 226.74 14.17 0.00 0.00 256577.80 22330.79 260978.92 00:17:05.228 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 00:17:05.228 Nvme7n1 : 1.12 227.74 14.23 0.00 0.00 250968.94 22427.88 270299.59 00:17:05.228 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 
00:17:05.228 Nvme8n1 : 1.14 225.22 14.08 0.00 0.00 249521.87 21359.88 270299.59 00:17:05.228 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 00:17:05.228 Nvme9n1 : 1.18 217.21 13.58 0.00 0.00 255629.65 16311.18 315349.52 00:17:05.228 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.228 Verification LBA range: start 0x0 length 0x400 00:17:05.228 Nvme10n1 : 1.20 267.51 16.72 0.00 0.00 204596.87 13107.20 274959.93 00:17:05.228 =================================================================================================================== 00:17:05.228 Total : 2262.55 141.41 0.00 0.00 259302.23 13107.20 315349.52 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.486 rmmod nvme_tcp 00:17:05.486 rmmod nvme_fabrics 00:17:05.486 rmmod nvme_keyring 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2334294 ']' 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2334294 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 2334294 ']' 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 2334294 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2334294 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:05.486 
02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2334294' 00:17:05.486 killing process with pid 2334294 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 2334294 00:17:05.486 [2024-05-15 02:33:52.799809] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:05.486 02:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 2334294 00:17:06.051 02:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.052 02:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.052 02:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.052 02:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.052 02:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.052 02:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.052 02:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.052 02:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.582 00:17:08.582 real 0m13.178s 00:17:08.582 user 0m37.527s 00:17:08.582 sys 0m3.704s 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:08.582 ************************************ 00:17:08.582 END TEST nvmf_shutdown_tc1 00:17:08.582 ************************************ 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:08.582 ************************************ 00:17:08.582 START TEST nvmf_shutdown_tc2 00:17:08.582 ************************************ 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.582 02:33:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.582 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.582 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.582 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.583 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.583 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:17:08.583 00:17:08.583 --- 10.0.0.2 ping statistics --- 00:17:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.583 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:17:08.583 00:17:08.583 --- 10.0.0.1 ping statistics --- 00:17:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.583 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2335672 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2335672 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2335672 ']' 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:08.583 [2024-05-15 02:33:55.666419] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:17:08.583 [2024-05-15 02:33:55.666503] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.583 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.583 [2024-05-15 02:33:55.745781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.583 [2024-05-15 02:33:55.865400] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.583 [2024-05-15 02:33:55.865456] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.583 [2024-05-15 02:33:55.865472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.583 [2024-05-15 02:33:55.865485] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.583 [2024-05-15 02:33:55.865496] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.583 [2024-05-15 02:33:55.865575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.583 [2024-05-15 02:33:55.865689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.583 [2024-05-15 02:33:55.865755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:08.583 [2024-05-15 02:33:55.865758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.583 02:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:08.842 [2024-05-15 02:33:56.024849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.842 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:08.842 Malloc1 00:17:08.842 [2024-05-15 02:33:56.114100] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:08.842 [2024-05-15 02:33:56.114431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.842 Malloc2 00:17:08.842 Malloc3 00:17:08.842 Malloc4 00:17:09.100 Malloc5 00:17:09.100 Malloc6 00:17:09.100 Malloc7 00:17:09.100 Malloc8 00:17:09.100 Malloc9 00:17:09.357 Malloc10 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:09.358 02:33:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2335849 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2335849 /var/tmp/bdevperf.sock 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2335849 ']' 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.358 { 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme$subsystem", 00:17:09.358 "trtype": "$TEST_TRANSPORT", 00:17:09.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "$NVMF_PORT", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.358 "hdgst": ${hdgst:-false}, 00:17:09.358 "ddgst": ${ddgst:-false} 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 } 00:17:09.358 EOF 00:17:09.358 )") 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:17:09.358 02:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme1", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:09.358 "hdgst": false, 00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 },{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme2", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:09.358 "hdgst": false, 00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 },{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme3", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:09.358 "hdgst": false, 00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 },{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme4", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:09.358 "hdgst": false, 00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 },{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme5", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:09.358 "hdgst": false, 00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 },{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme6", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:09.358 "hdgst": false, 00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 },{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme7", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:09.358 "hdgst": false, 00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 },{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme8", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:09.358 "hdgst": false, 
00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 },{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme9", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:09.358 "hdgst": false, 00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 },{ 00:17:09.358 "params": { 00:17:09.358 "name": "Nvme10", 00:17:09.358 "trtype": "tcp", 00:17:09.358 "traddr": "10.0.0.2", 00:17:09.358 "adrfam": "ipv4", 00:17:09.358 "trsvcid": "4420", 00:17:09.358 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:09.358 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:09.358 "hdgst": false, 00:17:09.358 "ddgst": false 00:17:09.358 }, 00:17:09.358 "method": "bdev_nvme_attach_controller" 00:17:09.358 }' 00:17:09.358 [2024-05-15 02:33:56.628381] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:09.358 [2024-05-15 02:33:56.628466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2335849 ] 00:17:09.358 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.358 [2024-05-15 02:33:56.701687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.616 [2024-05-15 02:33:56.812887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.987 Running I/O for 10 seconds... 00:17:10.987 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:10.987 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:17:10.987 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:10.987 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.987 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:11.287 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:11.288 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:17:11.545 02:33:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2335849 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2335849 ']' 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2335849 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
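The waitforio trace above (target/shutdown.sh@57-67) is the readiness gate for tc2: it polls bdev_get_iostat for Nvme1n1 over the bdevperf RPC socket up to ten times, a quarter second apart, and succeeds once num_read_ops reaches 100 (here 3, then 67, then 195). A minimal sketch of that polling loop, not part of the captured console output and assuming the trace's rpc_cmd wrapper resolves to the SPDK scripts/rpc.py client:

    waitforio() {
        local sock=$1 bdev=$2 i count ret=1
        for ((i = 10; i != 0; i--)); do
            # read ops completed so far on the named bdev
            count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    waitforio /var/tmp/bdevperf.sock Nvme1n1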
00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2335849 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2335849' 00:17:11.803 killing process with pid 2335849 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2335849 00:17:11.803 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2335849 00:17:12.061 Received shutdown signal, test time was about 0.941120 seconds 00:17:12.061 00:17:12.061 Latency(us) 00:17:12.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.061 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme1n1 : 0.90 284.42 17.78 0.00 0.00 222032.40 22622.06 243891.01 00:17:12.061 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme2n1 : 0.90 142.39 8.90 0.00 0.00 426418.63 30292.20 349525.33 00:17:12.061 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme3n1 : 0.92 279.37 17.46 0.00 0.00 217109.43 22719.15 251658.24 00:17:12.061 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme4n1 : 0.91 210.80 13.17 0.00 0.00 281887.98 22136.60 271853.04 00:17:12.061 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme5n1 : 0.89 287.33 17.96 0.00 0.00 200634.97 23107.51 226803.11 00:17:12.061 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme6n1 : 0.87 146.49 9.16 0.00 0.00 383816.44 25437.68 324670.20 00:17:12.061 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme7n1 : 0.94 239.29 14.96 0.00 0.00 217221.80 18447.17 254765.13 00:17:12.061 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme8n1 : 0.87 220.26 13.77 0.00 0.00 244824.49 24466.77 251658.24 00:17:12.061 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme9n1 : 0.91 280.27 17.52 0.00 0.00 189435.83 21942.42 250104.79 00:17:12.061 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.061 Verification LBA range: start 0x0 length 0x400 00:17:12.061 Nvme10n1 : 0.91 141.24 8.83 0.00 0.00 367818.52 43496.49 310689.19 00:17:12.061 =================================================================================================================== 00:17:12.061 Total : 2231.87 139.49 
0.00 0.00 254374.32 18447.17 349525.33 00:17:12.319 02:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2335672 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.250 rmmod nvme_tcp 00:17:13.250 rmmod nvme_fabrics 00:17:13.250 rmmod nvme_keyring 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2335672 ']' 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2335672 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2335672 ']' 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2335672 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2335672 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2335672' 00:17:13.250 killing process with pid 2335672 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2335672 00:17:13.250 [2024-05-15 02:34:00.600677] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:17:13.250 02:34:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2335672 00:17:13.815 02:34:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.815 02:34:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.815 02:34:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.815 02:34:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.815 02:34:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.815 02:34:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.815 02:34:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.815 02:34:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:16.347 00:17:16.347 real 0m7.715s 00:17:16.347 user 0m22.597s 00:17:16.347 sys 0m1.599s 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:16.347 ************************************ 00:17:16.347 END TEST nvmf_shutdown_tc2 00:17:16.347 ************************************ 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:16.347 ************************************ 00:17:16.347 START TEST nvmf_shutdown_tc3 00:17:16.347 ************************************ 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.347 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:16.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:16.348 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:16.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:16.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:17:16.348 00:17:16.348 --- 10.0.0.2 ping statistics --- 00:17:16.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.348 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:17:16.348 00:17:16.348 --- 10.0.0.1 ping statistics --- 00:17:16.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.348 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2336768 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:16.348 02:34:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2336768 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2336768 ']' 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:16.348 02:34:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:16.348 [2024-05-15 02:34:03.440911] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:16.348 [2024-05-15 02:34:03.441018] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.348 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.348 [2024-05-15 02:34:03.517516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.348 [2024-05-15 02:34:03.628252] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.348 [2024-05-15 02:34:03.628306] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.348 [2024-05-15 02:34:03.628319] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.348 [2024-05-15 02:34:03.628331] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.348 [2024-05-15 02:34:03.628340] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:16.348 [2024-05-15 02:34:03.628438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.348 [2024-05-15 02:34:03.628512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.348 [2024-05-15 02:34:03.628571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.348 [2024-05-15 02:34:03.628574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.280 [2024-05-15 02:34:04.414865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.280 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.280 Malloc1 00:17:17.280 [2024-05-15 02:34:04.491409] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:17.280 [2024-05-15 02:34:04.491724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.280 Malloc2 00:17:17.280 Malloc3 00:17:17.280 Malloc4 00:17:17.280 Malloc5 00:17:17.538 Malloc6 00:17:17.538 Malloc7 00:17:17.538 Malloc8 00:17:17.538 Malloc9 00:17:17.538 Malloc10 00:17:17.538 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.538 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:17.538 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.538 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2337067 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2337067 /var/tmp/bdevperf.sock 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2337067 ']' 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 
00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.796 { 00:17:17.796 "params": { 00:17:17.796 "name": "Nvme$subsystem", 00:17:17.796 "trtype": "$TEST_TRANSPORT", 00:17:17.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.796 "adrfam": "ipv4", 00:17:17.796 "trsvcid": "$NVMF_PORT", 00:17:17.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.796 "hdgst": ${hdgst:-false}, 00:17:17.796 "ddgst": ${ddgst:-false} 00:17:17.796 }, 00:17:17.796 "method": "bdev_nvme_attach_controller" 00:17:17.796 } 00:17:17.796 EOF 00:17:17.796 )") 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.796 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.796 { 00:17:17.796 "params": { 00:17:17.796 "name": "Nvme$subsystem", 00:17:17.796 "trtype": "$TEST_TRANSPORT", 00:17:17.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "$NVMF_PORT", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.797 "hdgst": ${hdgst:-false}, 00:17:17.797 "ddgst": ${ddgst:-false} 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 } 00:17:17.797 EOF 00:17:17.797 )") 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.797 { 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme$subsystem", 00:17:17.797 "trtype": "$TEST_TRANSPORT", 00:17:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "$NVMF_PORT", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.797 "hdgst": ${hdgst:-false}, 00:17:17.797 "ddgst": ${ddgst:-false} 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 } 00:17:17.797 EOF 00:17:17.797 )") 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.797 { 00:17:17.797 "params": { 
00:17:17.797 "name": "Nvme$subsystem", 00:17:17.797 "trtype": "$TEST_TRANSPORT", 00:17:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "$NVMF_PORT", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.797 "hdgst": ${hdgst:-false}, 00:17:17.797 "ddgst": ${ddgst:-false} 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 } 00:17:17.797 EOF 00:17:17.797 )") 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.797 { 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme$subsystem", 00:17:17.797 "trtype": "$TEST_TRANSPORT", 00:17:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "$NVMF_PORT", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.797 "hdgst": ${hdgst:-false}, 00:17:17.797 "ddgst": ${ddgst:-false} 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 } 00:17:17.797 EOF 00:17:17.797 )") 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.797 { 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme$subsystem", 00:17:17.797 "trtype": "$TEST_TRANSPORT", 00:17:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "$NVMF_PORT", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.797 "hdgst": ${hdgst:-false}, 00:17:17.797 "ddgst": ${ddgst:-false} 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 } 00:17:17.797 EOF 00:17:17.797 )") 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.797 { 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme$subsystem", 00:17:17.797 "trtype": "$TEST_TRANSPORT", 00:17:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "$NVMF_PORT", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.797 "hdgst": ${hdgst:-false}, 00:17:17.797 "ddgst": ${ddgst:-false} 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 } 00:17:17.797 EOF 00:17:17.797 )") 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.797 { 00:17:17.797 "params": { 00:17:17.797 "name": 
"Nvme$subsystem", 00:17:17.797 "trtype": "$TEST_TRANSPORT", 00:17:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "$NVMF_PORT", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.797 "hdgst": ${hdgst:-false}, 00:17:17.797 "ddgst": ${ddgst:-false} 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 } 00:17:17.797 EOF 00:17:17.797 )") 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.797 { 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme$subsystem", 00:17:17.797 "trtype": "$TEST_TRANSPORT", 00:17:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "$NVMF_PORT", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.797 "hdgst": ${hdgst:-false}, 00:17:17.797 "ddgst": ${ddgst:-false} 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 } 00:17:17.797 EOF 00:17:17.797 )") 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.797 { 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme$subsystem", 00:17:17.797 "trtype": "$TEST_TRANSPORT", 00:17:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "$NVMF_PORT", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.797 "hdgst": ${hdgst:-false}, 00:17:17.797 "ddgst": ${ddgst:-false} 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 } 00:17:17.797 EOF 00:17:17.797 )") 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:17:17.797 02:34:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme1", 00:17:17.797 "trtype": "tcp", 00:17:17.797 "traddr": "10.0.0.2", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "4420", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.797 "hdgst": false, 00:17:17.797 "ddgst": false 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 },{ 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme2", 00:17:17.797 "trtype": "tcp", 00:17:17.797 "traddr": "10.0.0.2", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "4420", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:17.797 "hdgst": false, 00:17:17.797 "ddgst": false 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 },{ 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme3", 00:17:17.797 "trtype": "tcp", 00:17:17.797 "traddr": "10.0.0.2", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "4420", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:17.797 "hdgst": false, 00:17:17.797 "ddgst": false 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 },{ 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme4", 00:17:17.797 "trtype": "tcp", 00:17:17.797 "traddr": "10.0.0.2", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "4420", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:17.797 "hdgst": false, 00:17:17.797 "ddgst": false 00:17:17.797 }, 00:17:17.797 "method": "bdev_nvme_attach_controller" 00:17:17.797 },{ 00:17:17.797 "params": { 00:17:17.797 "name": "Nvme5", 00:17:17.797 "trtype": "tcp", 00:17:17.797 "traddr": "10.0.0.2", 00:17:17.797 "adrfam": "ipv4", 00:17:17.797 "trsvcid": "4420", 00:17:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:17.797 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:17.798 "hdgst": false, 00:17:17.798 "ddgst": false 00:17:17.798 }, 00:17:17.798 "method": "bdev_nvme_attach_controller" 00:17:17.798 },{ 00:17:17.798 "params": { 00:17:17.798 "name": "Nvme6", 00:17:17.798 "trtype": "tcp", 00:17:17.798 "traddr": "10.0.0.2", 00:17:17.798 "adrfam": "ipv4", 00:17:17.798 "trsvcid": "4420", 00:17:17.798 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:17.798 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:17.798 "hdgst": false, 00:17:17.798 "ddgst": false 00:17:17.798 }, 00:17:17.798 "method": "bdev_nvme_attach_controller" 00:17:17.798 },{ 00:17:17.798 "params": { 00:17:17.798 "name": "Nvme7", 00:17:17.798 "trtype": "tcp", 00:17:17.798 "traddr": "10.0.0.2", 00:17:17.798 "adrfam": "ipv4", 00:17:17.798 "trsvcid": "4420", 00:17:17.798 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:17.798 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:17.798 "hdgst": false, 00:17:17.798 "ddgst": false 00:17:17.798 }, 00:17:17.798 "method": "bdev_nvme_attach_controller" 00:17:17.798 },{ 00:17:17.798 "params": { 00:17:17.798 "name": "Nvme8", 00:17:17.798 "trtype": "tcp", 00:17:17.798 "traddr": "10.0.0.2", 00:17:17.798 "adrfam": "ipv4", 00:17:17.798 "trsvcid": "4420", 00:17:17.798 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:17.798 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:17.798 "hdgst": false, 
00:17:17.798 "ddgst": false 00:17:17.798 }, 00:17:17.798 "method": "bdev_nvme_attach_controller" 00:17:17.798 },{ 00:17:17.798 "params": { 00:17:17.798 "name": "Nvme9", 00:17:17.798 "trtype": "tcp", 00:17:17.798 "traddr": "10.0.0.2", 00:17:17.798 "adrfam": "ipv4", 00:17:17.798 "trsvcid": "4420", 00:17:17.798 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:17.798 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:17.798 "hdgst": false, 00:17:17.798 "ddgst": false 00:17:17.798 }, 00:17:17.798 "method": "bdev_nvme_attach_controller" 00:17:17.798 },{ 00:17:17.798 "params": { 00:17:17.798 "name": "Nvme10", 00:17:17.798 "trtype": "tcp", 00:17:17.798 "traddr": "10.0.0.2", 00:17:17.798 "adrfam": "ipv4", 00:17:17.798 "trsvcid": "4420", 00:17:17.798 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:17.798 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:17.798 "hdgst": false, 00:17:17.798 "ddgst": false 00:17:17.798 }, 00:17:17.798 "method": "bdev_nvme_attach_controller" 00:17:17.798 }' 00:17:17.798 [2024-05-15 02:34:05.008192] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:17.798 [2024-05-15 02:34:05.008294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337067 ] 00:17:17.798 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.798 [2024-05-15 02:34:05.085405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.798 [2024-05-15 02:34:05.195801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.694 Running I/O for 10 seconds... 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2336768 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 2336768 ']' 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 2336768 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2336768 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2336768' 00:17:20.643 killing process with pid 2336768 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 2336768 00:17:20.643 [2024-05-15 02:34:07.824320] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:20.643 02:34:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 2336768 00:17:20.643 [2024-05-15 02:34:07.825026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same 
with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set 00:17:20.643 [2024-05-15 02:34:07.825408] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704930 is same with the state(5) to be set
[... the same tcp.c:1598 *ERROR* line repeats for tqpair=0x1704930 from 02:34:07.825420 through 02:34:07.825829 ...]
00:17:20.644 [2024-05-15 02:34:07.828185] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704dd0 is same with the state(5) to be set
[... repeats for tqpair=0x1704dd0 through 02:34:07.829055 ...]
00:17:20.645 [2024-05-15 02:34:07.830608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1705270 is same with the state(5) to be set
[... repeats for tqpair=0x1705270 through 02:34:07.831464 ...]
00:17:20.645 [2024-05-15 02:34:07.834067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1705710 is same with the state(5) to be set
[... repeats for tqpair=0x1705710 through 02:34:07.835063, interleaved with the I/O abort dump that follows ...]
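The tcp.c:1598 entries above all come from the same guard: the transport refuses to re-enter a receive state the qpair is already in and logs an error instead, so a teardown path that keeps requesting the current state floods the console with identical lines. Below is a minimal sketch of that guard pattern; the enum, struct, and function names are illustrative stand-ins, not the actual SPDK definitions behind "state(5)".

```c
#include <stdio.h>

/* Illustrative receive states for a TCP qpair; names and values are
 * stand-ins, not the SPDK enum that the log's "state(5)" refers to. */
enum recv_state {
	RECV_STATE_AWAIT_PDU_READY,
	RECV_STATE_AWAIT_PDU_HDR,
	RECV_STATE_AWAIT_PDU_PAYLOAD,
	RECV_STATE_QUIESCING,
	RECV_STATE_ERROR,
};

struct tqpair {
	enum recv_state recv_state;
};

/* Same-state guard: if the requested state equals the current one, log and
 * bail out instead of re-running the transition. Repeated requests for the
 * same state therefore produce repeated *ERROR* lines like the ones above. */
static void
set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tqpair q = { .recv_state = RECV_STATE_ERROR };

	set_recv_state(&q, RECV_STATE_ERROR);     /* logs: already in this state */
	set_recv_state(&q, RECV_STATE_QUIESCING); /* silent: state actually changes */
	return 0;
}
```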
00:17:20.646 [2024-05-15 02:34:07.834641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.646 [2024-05-15 02:34:07.834689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE/ABORTED pair repeats for cid:33 through cid:63 (lba 20608 through 24448, all len:128) ...]
00:17:20.647 [2024-05-15 02:34:07.835722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.647 [2024-05-15 02:34:07.835736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/ABORTED pair repeats for cid:1 through cid:31 (lba 16512 through 20352, all len:128), interleaved with tcp.c:1598 recv-state *ERROR* lines for tqpair=0x1705bb0 from 02:34:07.835954 through 02:34:07.836819 ...]
00:17:20.649 [2024-05-15 02:34:07.836806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:17:20.649 [2024-05-15 02:34:07.837411] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14d8550 was disconnected and freed. reset controller.
00:17:20.649 [2024-05-15 02:34:07.837556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:20.649 [2024-05-15 02:34:07.837579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST/ABORTED pair repeats for cid:1 through cid:3 ...]
00:17:20.649 [2024-05-15 02:34:07.837680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8100 is same with the state(5) to be set
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.837757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.837771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.837792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.837808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.837822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.837837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.837850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14346b0 is same with the state(5) to be set 00:17:20.649 [2024-05-15 02:34:07.837897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.837918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.837941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.837957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.837971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.837984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.837999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dad00 is same with the state(5) to be set 00:17:20.649 [2024-05-15 02:34:07.838072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:20.649 [2024-05-15 02:34:07.838136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706050 is same with the state(5) to be set 00:17:20.649 [2024-05-15 02:34:07.838177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14107c0 is same with the state(5) to be set 00:17:20.649 [2024-05-15 02:34:07.838234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143bd10 is same with the state(5) to be set 00:17:20.649 [2024-05-15 02:34:07.838414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.649 [2024-05-15 02:34:07.838519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b730 is same with the state(5) to be set 00:17:20.649 [2024-05-15 02:34:07.838811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.649 [2024-05-15 02:34:07.838835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.649 [2024-05-15 02:34:07.838871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.649 [2024-05-15 02:34:07.838902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.649 [2024-05-15 02:34:07.838941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.838960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.649 [2024-05-15 02:34:07.838990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.839007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.649 [2024-05-15 02:34:07.839022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.839038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.649 [2024-05-15 02:34:07.839053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.649 [2024-05-15 02:34:07.839070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.649 [2024-05-15 02:34:07.839084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:20.650 [2024-05-15 02:34:07.839427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650
[2024-05-15 02:34:07.839531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650
[2024-05-15 02:34:07.839596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650
[2024-05-15 02:34:07.839734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.650 [2024-05-15 02:34:07.839867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.650 [2024-05-15 02:34:07.839874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.650 [2024-05-15 02:34:07.839888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.839893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.839903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.839906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651
[2024-05-15 02:34:07.839917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.839920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.839954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.839955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.839968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.839973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.839985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.839990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.839998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651
[2024-05-15 02:34:07.840100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651
[2024-05-15 02:34:07.840270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17064f0 is same with the state(5) to be set 00:17:20.651 [2024-05-15 02:34:07.840323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1
lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.651 [2024-05-15 02:34:07.840764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.651 [2024-05-15 02:34:07.840778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.652 [2024-05-15 02:34:07.840799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.652 [2024-05-15 02:34:07.840814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.652 [2024-05-15 02:34:07.840829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.652 [2024-05-15 02:34:07.840843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.652 [2024-05-15 02:34:07.840858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.652 [2024-05-15 02:34:07.840872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.652 [2024-05-15 02:34:07.840888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.652 [2024-05-15 02:34:07.841354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841435] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the 
state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841791] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.841963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 
02:34:07.842160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706990 is same with the state(5) to be set 00:17:20.652 [2024-05-15 02:34:07.842951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171e440 is same with the state(5) to be set 00:17:20.653 [2024-05-15 02:34:07.842993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171e440 is same with the state(5) to be set 00:17:20.653 [2024-05-15 02:34:07.858644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.858759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.858778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.858948] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x155f400 was disconnected and freed. reset controller. 
00:17:20.653 [2024-05-15 02:34:07.859612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.859668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.859702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.859735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.859766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.859798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.859841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.859874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.859905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.859944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.859970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 
[2024-05-15 02:34:07.859987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 
02:34:07.860307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 
02:34:07.860614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.653 [2024-05-15 02:34:07.860820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.653 [2024-05-15 02:34:07.860836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.860850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.860866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.860881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.860898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.860912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 
02:34:07.860933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.860950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.860966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.860981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.860998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 
02:34:07.861257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.861678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.861721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:17:20.654 [2024-05-15 02:34:07.861799] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1409570 was disconnected and freed. reset controller. 00:17:20.654 [2024-05-15 02:34:07.862405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862582] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.654 [2024-05-15 02:34:07.862796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.654 [2024-05-15 02:34:07.862811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.862827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.862842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.862858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.862872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.862888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.862902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.862919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.862956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.862974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.862989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.863973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.863988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.864004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.864018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.864034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.864048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.864065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.655 [2024-05-15 02:34:07.864079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.655 [2024-05-15 02:34:07.864095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:20.656 [2024-05-15 02:34:07.864174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.656 [2024-05-15 02:34:07.864421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.864460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:17:20.656 [2024-05-15 02:34:07.864544] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14d7050 was disconnected and freed. reset controller. 
00:17:20.656 [2024-05-15 02:34:07.865784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.865808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.865825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.865844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.865859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.865873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.865888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.865901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.865915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5960 is same with the state(5) to be set 00:17:20.656 [2024-05-15 02:34:07.865968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.865989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.866005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.866019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.866033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.878966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.879071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3fb0 is same with the state(5) to be set 00:17:20.656 [2024-05-15 02:34:07.879215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.879237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.879267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.879298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.879327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cb230 is same with the state(5) to be set 00:17:20.656 [2024-05-15 02:34:07.879379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8100 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.879410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14346b0 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.879452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dad00 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.879486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14107c0 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.879518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143bd10 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.879548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6b730 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.879600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.879621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.879651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.879679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.656 [2024-05-15 02:34:07.879707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.656 [2024-05-15 02:34:07.879721] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145fe40 is same with the state(5) to be set 00:17:20.656 [2024-05-15 02:34:07.883593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:17:20.656 [2024-05-15 02:34:07.883653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:17:20.656 [2024-05-15 02:34:07.883752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5960 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.883786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b3fb0 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.883810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cb230 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.883871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145fe40 (9): Bad file descriptor 00:17:20.656 [2024-05-15 02:34:07.884595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:17:20.657 [2024-05-15 02:34:07.884629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:17:20.657 [2024-05-15 02:34:07.884894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.657 [2024-05-15 02:34:07.885080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.657 [2024-05-15 02:34:07.885109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8100 with addr=10.0.0.2, port=4420 00:17:20.657 [2024-05-15 02:34:07.885128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8100 is same with the state(5) to be set 00:17:20.657 [2024-05-15 02:34:07.885290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.657 [2024-05-15 02:34:07.885446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.657 [2024-05-15 02:34:07.885472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14346b0 with addr=10.0.0.2, port=4420 00:17:20.657 [2024-05-15 02:34:07.885509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14346b0 is same with the state(5) to be set 00:17:20.657 [2024-05-15 02:34:07.885583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 
02:34:07.885703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.885963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.885978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.657 [2024-05-15 02:34:07.886748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.657 [2024-05-15 02:34:07.886763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.886782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.886797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.886814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.886828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.886845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.886860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.886876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.886891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.886907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.886921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.886946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.886962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.886978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.886992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.887606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.887621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.888889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.888914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.888943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.888962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.888979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.888994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.889011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.889025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.889041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.889056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.889072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.889086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.889103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.889117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.889135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.889150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.889167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.889181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.889199] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.889214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.889230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.889245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.658 [2024-05-15 02:34:07.889266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.658 [2024-05-15 02:34:07.889281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.889973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.889989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:20.659 [2024-05-15 02:34:07.890475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.659 [2024-05-15 02:34:07.890661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.659 [2024-05-15 02:34:07.890676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.890691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.890708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.890723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.890739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.890754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.890770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 
02:34:07.890785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.890803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.890818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.890837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.890852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.890868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.890883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.890899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.890914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.893369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:20.660 [2024-05-15 02:34:07.893407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:17:20.660 [2024-05-15 02:34:07.893653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.660 [2024-05-15 02:34:07.893836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.660 [2024-05-15 02:34:07.893864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b5960 with addr=10.0.0.2, port=4420 00:17:20.660 [2024-05-15 02:34:07.893882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5960 is same with the state(5) to be set 00:17:20.660 [2024-05-15 02:34:07.894081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.660 [2024-05-15 02:34:07.894246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.660 [2024-05-15 02:34:07.894270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145fe40 with addr=10.0.0.2, port=4420 00:17:20.660 [2024-05-15 02:34:07.894287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145fe40 is same with the state(5) to be set 00:17:20.660 [2024-05-15 02:34:07.894313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8100 (9): Bad file descriptor 00:17:20.660 [2024-05-15 02:34:07.894334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14346b0 (9): Bad file descriptor 00:17:20.660 [2024-05-15 02:34:07.894462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.894969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.894984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.895017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.895054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.895085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.895116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.895147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.895178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.895210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.895240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.660 [2024-05-15 02:34:07.895272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.660 [2024-05-15 02:34:07.895288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:20.661 [2024-05-15 02:34:07.895791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.895973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.895989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 
[2024-05-15 02:34:07.896114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 
02:34:07.896430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.896492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.896508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.897789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.897814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.897835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.897851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.897868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.897883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.897899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.661 [2024-05-15 02:34:07.897919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.661 [2024-05-15 02:34:07.897943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.897959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.897975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.897991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898037] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.898984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.898999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.662 [2024-05-15 02:34:07.899359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.662 [2024-05-15 02:34:07.899374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.899796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.899811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.901115] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:20.663 [2024-05-15 02:34:07.901208] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:20.663 [2024-05-15 02:34:07.901268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:17:20.663 [2024-05-15 02:34:07.901297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:17:20.663 [2024-05-15 02:34:07.901628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.663 [2024-05-15 02:34:07.901805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.663 [2024-05-15 02:34:07.901831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14107c0 with addr=10.0.0.2, port=4420 00:17:20.663 [2024-05-15 02:34:07.901847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14107c0 is same with the state(5) to be set 00:17:20.663 [2024-05-15 02:34:07.902011] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.663 [2024-05-15 02:34:07.902172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.663 [2024-05-15 02:34:07.902196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15dad00 with addr=10.0.0.2, port=4420 00:17:20.663 [2024-05-15 02:34:07.902212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dad00 is same with the state(5) to be set 00:17:20.663 [2024-05-15 02:34:07.902237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5960 (9): Bad file descriptor 00:17:20.663 [2024-05-15 02:34:07.902258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145fe40 (9): Bad file descriptor 00:17:20.663 [2024-05-15 02:34:07.902276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:17:20.663 [2024-05-15 02:34:07.902290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:17:20.663 [2024-05-15 02:34:07.902307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:17:20.663 [2024-05-15 02:34:07.902332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:17:20.663 [2024-05-15 02:34:07.902347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:17:20.663 [2024-05-15 02:34:07.902360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:17:20.663 [2024-05-15 02:34:07.902416] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.663 [2024-05-15 02:34:07.902439] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.663 [2024-05-15 02:34:07.902462] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.663 [2024-05-15 02:34:07.902482] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.663 [2024-05-15 02:34:07.903153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.663 [2024-05-15 02:34:07.903177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:20.663 [2024-05-15 02:34:07.903349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.663 [2024-05-15 02:34:07.903533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.663 [2024-05-15 02:34:07.903558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143bd10 with addr=10.0.0.2, port=4420 00:17:20.663 [2024-05-15 02:34:07.903574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143bd10 is same with the state(5) to be set 00:17:20.663 [2024-05-15 02:34:07.903761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.663 [2024-05-15 02:34:07.903923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.663 [2024-05-15 02:34:07.903953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6b730 with addr=10.0.0.2, port=4420 00:17:20.663 [2024-05-15 02:34:07.903969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b730 is same with the state(5) to be set 00:17:20.663 [2024-05-15 02:34:07.903988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14107c0 (9): Bad file descriptor 00:17:20.663 [2024-05-15 02:34:07.904007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dad00 (9): Bad file descriptor 00:17:20.663 [2024-05-15 02:34:07.904029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:17:20.663 [2024-05-15 02:34:07.904043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:17:20.663 [2024-05-15 02:34:07.904057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:17:20.663 [2024-05-15 02:34:07.904076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:17:20.663 [2024-05-15 02:34:07.904090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:17:20.663 [2024-05-15 02:34:07.904103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:17:20.663 [2024-05-15 02:34:07.904694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.904719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.904746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.904761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.904779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.904793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.904809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.904824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.904841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.904855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.904871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.904886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.904902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.904917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.904942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.904958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.904975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.904990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.905006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.905021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.905043] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.905059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.905076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.905091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.905107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.905122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.663 [2024-05-15 02:34:07.905138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.663 [2024-05-15 02:34:07.905153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.905976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.905991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.906442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.906457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.664 [2024-05-15 02:34:07.916004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.664 [2024-05-15 02:34:07.916064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.916082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.916097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.916115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.916130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.916147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.916161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.916177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.916193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.916209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.916236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.916254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.916269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.916284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.916299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.916315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.916330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.916347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aa70 is same with the state(5) to be set 00:17:20.665 [2024-05-15 02:34:07.917719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.917744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.917769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.917784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.917801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.917816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.917832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.917847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.917863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.917877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.917894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.917909] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.917925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.917948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.917965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.917980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.917997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.665 [2024-05-15 02:34:07.918923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.665 [2024-05-15 02:34:07.918950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.918978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.918994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:20.666 [2024-05-15 02:34:07.919215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 
02:34:07.919525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.666 [2024-05-15 02:34:07.919761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.666 [2024-05-15 02:34:07.919776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140bf70 is same with the state(5) to be set 00:17:20.666 [2024-05-15 02:34:07.921884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.666 [2024-05-15 02:34:07.921910] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:20.666 [2024-05-15 02:34:07.921928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:17:20.666 task offset: 20480 on job bdev=Nvme10n1 fails
00:17:20.666
00:17:20.666 Latency(us)
00:17:20.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:20.666 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme1n1 ended in about 0.82 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme1n1 : 0.82 156.39 9.77 78.19 0.00 269471.29 21068.61 251658.24
00:17:20.666 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme2n1 ended in about 0.82 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme2n1 : 0.82 155.77 9.74 77.88 0.00 264347.24 21359.88 246997.90
00:17:20.666 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme3n1 ended in about 0.83 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme3n1 : 0.83 77.36 4.83 77.36 0.00 390192.55 58254.22 320009.86
00:17:20.666 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme4n1 ended in about 0.81 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme4n1 : 0.81 236.79 14.80 78.93 0.00 186227.48 22427.88 248551.35
00:17:20.666 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme5n1 ended in about 0.83 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme5n1 : 0.83 154.10 9.63 77.05 0.00 248994.20 7524.50 253211.69
00:17:20.666 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme6n1 ended in about 0.81 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme6n1 : 0.81 157.64 9.85 78.82 0.00 236909.35 21845.33 267192.70
00:17:20.666 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme7n1 ended in about 0.85 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme7n1 : 0.85 75.55 4.72 75.55 0.00 364198.12 43690.67 320009.86
00:17:20.666 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme8n1 ended in about 0.85 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme8n1 : 0.85 150.49 9.41 75.25 0.00 237967.74 23301.69 285834.05
00:17:20.666 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme9n1 ended in about 0.81 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme9n1 : 0.81 157.39 9.84 78.70 0.00 219621.45 20291.89 220589.32
00:17:20.666 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.666 Job: Nvme10n1 ended in about 0.80 seconds with error
00:17:20.666 Verification LBA range: start 0x0 length 0x400
00:17:20.666 Nvme10n1 : 0.80 160.92 10.06 80.46 0.00 207945.83 23690.05 236123.78
00:17:20.666 ===================================================================================================================
00:17:20.666 Total : 1482.39 92.65 778.18 0.00 252050.43 7524.50 320009.86
00:17:20.666 [2024-05-15 02:34:07.951058] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:17:20.666 [2024-05-15 02:34:07.951146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:17:20.666 [2024-05-15 02:34:07.951250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143bd10 (9): Bad file descriptor 00:17:20.666 [2024-05-15 02:34:07.951291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6b730 (9): Bad file descriptor 00:17:20.666 [2024-05-15 02:34:07.951310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:20.666 [2024-05-15 02:34:07.951324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:20.666 [2024-05-15 02:34:07.951342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:20.666 [2024-05-15 02:34:07.951370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:20.666 [2024-05-15 02:34:07.951385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:17:20.666 [2024-05-15 02:34:07.951400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:20.666 [2024-05-15 02:34:07.951431] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.666 [2024-05-15 02:34:07.951455] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.666 [2024-05-15 02:34:07.951534] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.666 [2024-05-15 02:34:07.951558] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.666 [2024-05-15 02:34:07.951709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.666 [2024-05-15 02:34:07.951732] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:20.667 [2024-05-15 02:34:07.952076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.952413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.952446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b3fb0 with addr=10.0.0.2, port=4420 00:17:20.667 [2024-05-15 02:34:07.952475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3fb0 is same with the state(5) to be set 00:17:20.667 [2024-05-15 02:34:07.952646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.952832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.952859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15cb230 with addr=10.0.0.2, port=4420 00:17:20.667 [2024-05-15 02:34:07.952876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cb230 is same with the state(5) to be set 00:17:20.667 [2024-05-15 02:34:07.952896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.952909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:17:20.667 [2024-05-15 02:34:07.952922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:17:20.667 [2024-05-15 02:34:07.952950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.952966] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:17:20.667 [2024-05-15 02:34:07.952979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:17:20.667 [2024-05-15 02:34:07.953026] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.667 [2024-05-15 02:34:07.953050] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.667 [2024-05-15 02:34:07.953068] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.667 [2024-05-15 02:34:07.953086] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.667 [2024-05-15 02:34:07.953104] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:20.667 [2024-05-15 02:34:07.953122] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:20.667 [2024-05-15 02:34:07.953696] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:17:20.667 [2024-05-15 02:34:07.953724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:17:20.667 [2024-05-15 02:34:07.953743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:17:20.667 [2024-05-15 02:34:07.953758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:17:20.667 [2024-05-15 02:34:07.953796] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.667 [2024-05-15 02:34:07.953813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.667 [2024-05-15 02:34:07.953859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b3fb0 (9): Bad file descriptor 00:17:20.667 [2024-05-15 02:34:07.953883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cb230 (9): Bad file descriptor 00:17:20.667 [2024-05-15 02:34:07.953969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:17:20.667 [2024-05-15 02:34:07.954159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.954334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.954361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14346b0 with addr=10.0.0.2, port=4420 00:17:20.667 [2024-05-15 02:34:07.954378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14346b0 is same with the state(5) to be set 00:17:20.667 [2024-05-15 02:34:07.954538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.954706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.954733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8100 with addr=10.0.0.2, port=4420 00:17:20.667 [2024-05-15 02:34:07.954750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8100 is same with the state(5) to be set 00:17:20.667 [2024-05-15 02:34:07.954907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.955094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.955121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145fe40 with addr=10.0.0.2, port=4420 00:17:20.667 [2024-05-15 02:34:07.955138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145fe40 is same with the state(5) to be set 00:17:20.667 [2024-05-15 02:34:07.955300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.955456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.955482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b5960 with addr=10.0.0.2, port=4420 00:17:20.667 [2024-05-15 02:34:07.955499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5960 is same with the state(5) to be set 
00:17:20.667 [2024-05-15 02:34:07.955514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.955528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:17:20.667 [2024-05-15 02:34:07.955542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:17:20.667 [2024-05-15 02:34:07.955561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.955575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:17:20.667 [2024-05-15 02:34:07.955588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:17:20.667 [2024-05-15 02:34:07.955633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:20.667 [2024-05-15 02:34:07.955665] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.667 [2024-05-15 02:34:07.955683] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.667 [2024-05-15 02:34:07.955851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.956026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.956052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15dad00 with addr=10.0.0.2, port=4420 00:17:20.667 [2024-05-15 02:34:07.956069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dad00 is same with the state(5) to be set 00:17:20.667 [2024-05-15 02:34:07.956088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14346b0 (9): Bad file descriptor 00:17:20.667 [2024-05-15 02:34:07.956108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8100 (9): Bad file descriptor 00:17:20.667 [2024-05-15 02:34:07.956127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145fe40 (9): Bad file descriptor 00:17:20.667 [2024-05-15 02:34:07.956144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5960 (9): Bad file descriptor 00:17:20.667 [2024-05-15 02:34:07.956362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.956545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.667 [2024-05-15 02:34:07.956570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14107c0 with addr=10.0.0.2, port=4420 00:17:20.667 [2024-05-15 02:34:07.956591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14107c0 is same with the state(5) to be set 00:17:20.667 [2024-05-15 02:34:07.956611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dad00 (9): Bad file descriptor 00:17:20.667 [2024-05-15 02:34:07.956628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.956642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 
00:17:20.667 [2024-05-15 02:34:07.956656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:17:20.667 [2024-05-15 02:34:07.956673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.956688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:17:20.667 [2024-05-15 02:34:07.956702] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:17:20.667 [2024-05-15 02:34:07.956718] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.956732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:17:20.667 [2024-05-15 02:34:07.956745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:17:20.667 [2024-05-15 02:34:07.956761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.956775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:17:20.667 [2024-05-15 02:34:07.956789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:17:20.667 [2024-05-15 02:34:07.956842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.667 [2024-05-15 02:34:07.956862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.667 [2024-05-15 02:34:07.956874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.667 [2024-05-15 02:34:07.956887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.667 [2024-05-15 02:34:07.956903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14107c0 (9): Bad file descriptor 00:17:20.667 [2024-05-15 02:34:07.956920] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.956941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:17:20.667 [2024-05-15 02:34:07.956956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:20.667 [2024-05-15 02:34:07.956995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.667 [2024-05-15 02:34:07.957013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:20.667 [2024-05-15 02:34:07.957027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:20.667 [2024-05-15 02:34:07.957040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:20.667 [2024-05-15 02:34:07.957078] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
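As a quick cross-check of the bdevperf Latency(us) summary printed above, the per-device rows can be summed and compared against the Total row. A minimal awk sketch (an illustration only, not part of shutdown.sh; it assumes the table rows have been copied into a hypothetical file named bdevperf_summary.txt):

# Hypothetical helper, not from the test suite: sum the IOPS and MiB/s columns of the per-device rows.
awk '/Nvme[0-9]+n1[[:space:]]*:/ { iops += $(NF-6); mib += $(NF-5) } END { printf "IOPS=%.2f MiB/s=%.2f\n", iops, mib }' bdevperf_summary.txt

With the rows above this prints IOPS=1482.40 MiB/s=92.65, which agrees with the reported Total of 1482.39 IOPS and 92.65 MiB/s up to rounding.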
00:17:21.234 02:34:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:17:21.234 02:34:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2337067 00:17:22.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2337067) - No such process 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.169 rmmod nvme_tcp 00:17:22.169 rmmod nvme_fabrics 00:17:22.169 rmmod nvme_keyring 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.169 02:34:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.703 02:34:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:24.703 00:17:24.703 real 0m8.332s 00:17:24.703 user 0m21.794s 00:17:24.703 sys 0m1.503s 00:17:24.703 
02:34:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:24.703 02:34:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:24.703 ************************************ 00:17:24.703 END TEST nvmf_shutdown_tc3 00:17:24.703 ************************************ 00:17:24.703 02:34:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:17:24.703 00:17:24.703 real 0m29.462s 00:17:24.703 user 1m22.012s 00:17:24.703 sys 0m6.957s 00:17:24.703 02:34:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:24.703 02:34:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:24.703 ************************************ 00:17:24.703 END TEST nvmf_shutdown 00:17:24.703 ************************************ 00:17:24.703 02:34:11 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:17:24.703 02:34:11 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.703 02:34:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:24.703 02:34:11 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:17:24.703 02:34:11 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:24.703 02:34:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:24.703 02:34:11 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:17:24.703 02:34:11 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:24.703 02:34:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:24.703 02:34:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:24.703 02:34:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:24.703 ************************************ 00:17:24.703 START TEST nvmf_multicontroller 00:17:24.703 ************************************ 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:24.703 * Looking for test storage... 
00:17:24.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:24.703 02:34:11 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.703 02:34:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.601 02:34:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.601 02:34:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.601 02:34:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.601 02:34:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.601 02:34:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.601 02:34:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.601 02:34:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:26.601 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:26.601 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:26.601 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:26.601 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:26.601 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.859 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.859 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.859 02:34:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.859 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:26.859 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.859 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.859 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.859 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:26.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:17:26.859 00:17:26.860 --- 10.0.0.2 ping statistics --- 00:17:26.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.860 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:17:26.860 00:17:26.860 --- 10.0.0.1 ping statistics --- 00:17:26.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.860 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2339876 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2339876 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2339876 ']' 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:26.860 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.860 [2024-05-15 02:34:14.195714] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:26.860 [2024-05-15 02:34:14.195801] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.860 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.860 [2024-05-15 02:34:14.269816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:27.118 [2024-05-15 02:34:14.377779] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.118 [2024-05-15 02:34:14.377829] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.118 [2024-05-15 02:34:14.377849] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.118 [2024-05-15 02:34:14.377860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.118 [2024-05-15 02:34:14.377870] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.118 [2024-05-15 02:34:14.378018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.118 [2024-05-15 02:34:14.378080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.118 [2024-05-15 02:34:14.378084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.118 [2024-05-15 02:34:14.515115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:27.118 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.118 02:34:14 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 Malloc0 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 [2024-05-15 02:34:14.582540] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:27.405 [2024-05-15 02:34:14.582796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 [2024-05-15 02:34:14.590665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 Malloc1 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2339909 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2339909 /var/tmp/bdevperf.sock 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2339909 ']' 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
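(For reference, the target-side setup traced above condenses to roughly the RPC sequence below. This is a sketch, not a literal excerpt of multicontroller.sh; it assumes rpc_cmd is the autotest helper that forwards to SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace, as the trace suggests.)

    # TCP transport plus two subsystems, each backed by a 64 MB malloc bdev
    # (512-byte blocks) and listening on both TCP ports of 10.0.0.2.
    # Every command below appears verbatim in the xtrace above.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421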
00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:27.405 02:34:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.339 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:28.339 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:17:28.339 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:28.339 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.339 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.597 NVMe0n1 00:17:28.597 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.597 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:28.597 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:28.597 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.597 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.597 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.597 1 00:17:28.597 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:28.597 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.598 request: 00:17:28.598 { 00:17:28.598 "name": "NVMe0", 00:17:28.598 "trtype": "tcp", 00:17:28.598 "traddr": "10.0.0.2", 00:17:28.598 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:28.598 "hostaddr": "10.0.0.2", 00:17:28.598 "hostsvcid": "60000", 00:17:28.598 "adrfam": "ipv4", 00:17:28.598 "trsvcid": "4420", 00:17:28.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.598 "method": 
"bdev_nvme_attach_controller", 00:17:28.598 "req_id": 1 00:17:28.598 } 00:17:28.598 Got JSON-RPC error response 00:17:28.598 response: 00:17:28.598 { 00:17:28.598 "code": -114, 00:17:28.598 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:28.598 } 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.598 request: 00:17:28.598 { 00:17:28.598 "name": "NVMe0", 00:17:28.598 "trtype": "tcp", 00:17:28.598 "traddr": "10.0.0.2", 00:17:28.598 "hostaddr": "10.0.0.2", 00:17:28.598 "hostsvcid": "60000", 00:17:28.598 "adrfam": "ipv4", 00:17:28.598 "trsvcid": "4420", 00:17:28.598 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:28.598 "method": "bdev_nvme_attach_controller", 00:17:28.598 "req_id": 1 00:17:28.598 } 00:17:28.598 Got JSON-RPC error response 00:17:28.598 response: 00:17:28.598 { 00:17:28.598 "code": -114, 00:17:28.598 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:28.598 } 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.598 request: 00:17:28.598 { 00:17:28.598 "name": "NVMe0", 00:17:28.598 "trtype": "tcp", 00:17:28.598 "traddr": "10.0.0.2", 00:17:28.598 "hostaddr": "10.0.0.2", 00:17:28.598 "hostsvcid": "60000", 00:17:28.598 "adrfam": "ipv4", 00:17:28.598 "trsvcid": "4420", 00:17:28.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.598 "multipath": "disable", 00:17:28.598 "method": "bdev_nvme_attach_controller", 00:17:28.598 "req_id": 1 00:17:28.598 } 00:17:28.598 Got JSON-RPC error response 00:17:28.598 response: 00:17:28.598 { 00:17:28.598 "code": -114, 00:17:28.598 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:17:28.598 } 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.598 request: 00:17:28.598 { 00:17:28.598 "name": "NVMe0", 00:17:28.598 "trtype": "tcp", 00:17:28.598 "traddr": "10.0.0.2", 00:17:28.598 "hostaddr": "10.0.0.2", 00:17:28.598 "hostsvcid": "60000", 00:17:28.598 "adrfam": "ipv4", 00:17:28.598 "trsvcid": "4420", 00:17:28.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.598 "multipath": "failover", 00:17:28.598 "method": "bdev_nvme_attach_controller", 00:17:28.598 "req_id": 1 00:17:28.598 } 00:17:28.598 Got JSON-RPC error response 00:17:28.598 response: 00:17:28.598 { 00:17:28.598 "code": -114, 00:17:28.598 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:28.598 } 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.598 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.598 02:34:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.856 00:17:28.856 02:34:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.856 02:34:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:28.856 02:34:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.856 02:34:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:28.856 02:34:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:28.856 02:34:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.856 02:34:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:28.856 02:34:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:30.229 0 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2339909 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2339909 ']' 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2339909 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2339909 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2339909' 00:17:30.229 killing process with pid 2339909 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2339909 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2339909 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:17:30.229 02:34:17 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:17:30.229 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:30.229 [2024-05-15 02:34:14.694499] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:30.229 [2024-05-15 02:34:14.694597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2339909 ] 00:17:30.229 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.229 [2024-05-15 02:34:14.764762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.229 [2024-05-15 02:34:14.874798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.229 [2024-05-15 02:34:16.093311] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 5c0241ff-a670-4b56-9b0c-afd8fc261959 already exists 00:17:30.229 [2024-05-15 02:34:16.093356] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:5c0241ff-a670-4b56-9b0c-afd8fc261959 alias for bdev NVMe1n1 00:17:30.229 [2024-05-15 02:34:16.093374] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:30.229 Running I/O for 1 seconds... 
00:17:30.229 00:17:30.229 Latency(us) 00:17:30.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.229 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:30.229 NVMe0n1 : 1.01 17043.77 66.58 0.00 0.00 7476.50 6893.42 14175.19 00:17:30.229 =================================================================================================================== 00:17:30.229 Total : 17043.77 66.58 0.00 0.00 7476.50 6893.42 14175.19 00:17:30.229 Received shutdown signal, test time was about 1.000000 seconds 00:17:30.229 00:17:30.229 Latency(us) 00:17:30.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.229 =================================================================================================================== 00:17:30.229 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.229 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.229 rmmod nvme_tcp 00:17:30.229 rmmod nvme_fabrics 00:17:30.229 rmmod nvme_keyring 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2339876 ']' 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2339876 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2339876 ']' 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2339876 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2339876 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2339876' 00:17:30.229 killing process with pid 2339876 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2339876 00:17:30.229 [2024-05-15 
02:34:17.637202] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:30.229 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2339876 00:17:30.797 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.797 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.797 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.797 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.797 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.797 02:34:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.797 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.797 02:34:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.701 02:34:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.701 00:17:32.701 real 0m8.336s 00:17:32.701 user 0m14.376s 00:17:32.701 sys 0m2.564s 00:17:32.701 02:34:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.701 02:34:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:32.701 ************************************ 00:17:32.701 END TEST nvmf_multicontroller 00:17:32.701 ************************************ 00:17:32.701 02:34:20 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:32.701 02:34:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:32.701 02:34:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:32.701 02:34:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:32.701 ************************************ 00:17:32.701 START TEST nvmf_aer 00:17:32.701 ************************************ 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:32.701 * Looking for test storage... 
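(The host-side portion of the nvmf_multicontroller run that just ended reduces to roughly the attach calls below; this is a sketch of what the bdevperf RPC trace shows, not the literal script. In that trace, re-attaching the name NVMe0 over the already-connected 10.0.0.2:4420 path is rejected with JSON-RPC error -114 whether a different host NQN, a different subsystem NQN, -x disable, or -x failover is supplied, while pointing the same controller name at the second listener on port 4421 is accepted as an additional path.)

    # First path: creates bdev NVMe0n1 through the bdevperf RPC socket.
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000
    # Second path to the same subsystem via the 4421 listener: accepted,
    # after which bdev_nvme_get_controllers reports two controllers.
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1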
00:17:32.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.701 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.959 02:34:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:35.490 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:17:35.490 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:35.490 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.490 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:35.490 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.491 
02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:17:35.491 00:17:35.491 --- 10.0.0.2 ping statistics --- 00:17:35.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.491 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:17:35.491 00:17:35.491 --- 10.0.0.1 ping statistics --- 00:17:35.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.491 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2342544 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2342544 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 2342544 ']' 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:35.491 02:34:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:35.491 [2024-05-15 02:34:22.744392] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:35.491 [2024-05-15 02:34:22.744466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.491 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.491 [2024-05-15 02:34:22.818125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.749 [2024-05-15 02:34:22.930631] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.749 [2024-05-15 02:34:22.930679] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:35.749 [2024-05-15 02:34:22.930707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.749 [2024-05-15 02:34:22.930720] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.749 [2024-05-15 02:34:22.930730] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.749 [2024-05-15 02:34:22.930818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.749 [2024-05-15 02:34:22.930860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.749 [2024-05-15 02:34:22.930917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.749 [2024-05-15 02:34:22.930920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 [2024-05-15 02:34:23.760032] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 Malloc0 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 [2024-05-15 02:34:23.811283] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:36.681 [2024-05-15 02:34:23.811580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.681 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 [ 00:17:36.681 { 00:17:36.681 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:36.681 "subtype": "Discovery", 00:17:36.681 "listen_addresses": [], 00:17:36.681 "allow_any_host": true, 00:17:36.681 "hosts": [] 00:17:36.682 }, 00:17:36.682 { 00:17:36.682 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.682 "subtype": "NVMe", 00:17:36.682 "listen_addresses": [ 00:17:36.682 { 00:17:36.682 "trtype": "TCP", 00:17:36.682 "adrfam": "IPv4", 00:17:36.682 "traddr": "10.0.0.2", 00:17:36.682 "trsvcid": "4420" 00:17:36.682 } 00:17:36.682 ], 00:17:36.682 "allow_any_host": true, 00:17:36.682 "hosts": [], 00:17:36.682 "serial_number": "SPDK00000000000001", 00:17:36.682 "model_number": "SPDK bdev Controller", 00:17:36.682 "max_namespaces": 2, 00:17:36.682 "min_cntlid": 1, 00:17:36.682 "max_cntlid": 65519, 00:17:36.682 "namespaces": [ 00:17:36.682 { 00:17:36.682 "nsid": 1, 00:17:36.682 "bdev_name": "Malloc0", 00:17:36.682 "name": "Malloc0", 00:17:36.682 "nguid": "F6858E3296194DBB861DC887C9A9C904", 00:17:36.682 "uuid": "f6858e32-9619-4dbb-861d-c887c9a9c904" 00:17:36.682 } 00:17:36.682 ] 00:17:36.682 } 00:17:36.682 ] 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2342700 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:17:36.682 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:17:36.682 02:34:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.682 Malloc1 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.682 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.940 [ 00:17:36.940 { 00:17:36.940 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:36.940 "subtype": "Discovery", 00:17:36.940 "listen_addresses": [], 00:17:36.940 "allow_any_host": true, 00:17:36.940 "hosts": [] 00:17:36.940 }, 00:17:36.940 { 00:17:36.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.940 "subtype": "NVMe", 00:17:36.940 "listen_addresses": [ 00:17:36.940 { 00:17:36.940 "trtype": "TCP", 00:17:36.940 "adrfam": "IPv4", 00:17:36.940 "traddr": "10.0.0.2", 00:17:36.940 "trsvcid": "4420" 00:17:36.940 } 00:17:36.940 ], 00:17:36.940 "allow_any_host": true, 00:17:36.940 "hosts": [], 00:17:36.940 "serial_number": "SPDK00000000000001", 00:17:36.940 "model_number": "SPDK bdev Controller", 00:17:36.940 "max_namespaces": 2, 00:17:36.940 "min_cntlid": 1, 00:17:36.940 "max_cntlid": 65519, 00:17:36.940 "namespaces": [ 00:17:36.940 { 00:17:36.940 "nsid": 1, 00:17:36.940 "bdev_name": "Malloc0", 00:17:36.940 "name": "Malloc0", 00:17:36.940 "nguid": "F6858E3296194DBB861DC887C9A9C904", 00:17:36.940 "uuid": "f6858e32-9619-4dbb-861d-c887c9a9c904" 00:17:36.940 }, 00:17:36.940 { 00:17:36.940 "nsid": 2, 00:17:36.940 "bdev_name": "Malloc1", 00:17:36.940 "name": "Malloc1", 00:17:36.940 "nguid": "308C74381EED4D52B33757D91E7BE250", 00:17:36.940 "uuid": "308c7438-1eed-4d52-b337-57d91e7be250" 00:17:36.940 } 00:17:36.940 ] 00:17:36.940 } 00:17:36.940 ] 00:17:36.940 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.940 02:34:24 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2342700 00:17:36.940 Asynchronous Event Request test 00:17:36.940 Attaching to 10.0.0.2 00:17:36.940 Attached to 10.0.0.2 00:17:36.940 Registering asynchronous event callbacks... 00:17:36.940 Starting namespace attribute notice tests for all controllers... 00:17:36.940 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:36.940 aer_cb - Changed Namespace 00:17:36.940 Cleaning up... 
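The host/aer.sh flow captured above amounts to: create a TCP transport, expose Malloc0 through nqn.2016-06.io.spdk:cnode1, start the aer test app armed for 2 namespaces, then hot-add a second namespace so the target raises a namespace-attribute-changed AEN. A hedged sketch of the same sequence using the standard rpc.py client, with paths, addresses, and arguments taken from the log; it assumes rpc.py talks to the default /var/tmp/spdk.sock of the target started above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Target-side configuration, mirroring host/aer.sh@14-19 above.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # Host-side AER listener: touches the file once its callbacks are registered,
    # then waits for a namespace-change notice on the second namespace.
    rm -f /tmp/aer_touch_file
    $SPDK/test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

    # Hot-add a second namespace; this is what triggers the AEN seen above.
    $SPDK/scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait $aerpid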
00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:36.941 rmmod nvme_tcp 00:17:36.941 rmmod nvme_fabrics 00:17:36.941 rmmod nvme_keyring 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2342544 ']' 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2342544 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 2342544 ']' 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 2342544 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2342544 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2342544' 00:17:36.941 killing process with pid 2342544 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 2342544 00:17:36.941 [2024-05-15 02:34:24.245851] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:36.941 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 2342544 
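Each test in this run repeats the same nvmf_tcp_init topology seen at the top of nvmf_aer and again in the nvmf_async_init setup that follows: the first E810 port (cvl_0_0) is moved into a private namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and an iptables rule opens NVMe/TCP port 4420. A condensed sketch of that setup, using only the commands, interface names, and addresses recorded in the trace:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (default netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity check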
00:17:37.200 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:37.200 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:37.200 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:37.200 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:37.200 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:37.200 02:34:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.200 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.200 02:34:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.732 02:34:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:39.732 00:17:39.732 real 0m6.518s 00:17:39.732 user 0m7.184s 00:17:39.732 sys 0m2.248s 00:17:39.732 02:34:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:39.732 02:34:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:39.732 ************************************ 00:17:39.732 END TEST nvmf_aer 00:17:39.732 ************************************ 00:17:39.732 02:34:26 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:39.732 02:34:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:39.732 02:34:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:39.732 02:34:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:39.732 ************************************ 00:17:39.732 START TEST nvmf_async_init 00:17:39.732 ************************************ 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:39.732 * Looking for test storage... 
00:17:39.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.732 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d6fe568350884c92818db2319fb514da 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:39.733 02:34:26 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:17:39.733 02:34:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.633 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:41.891 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:41.892 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:41.892 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:41.892 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:41.892 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:41.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:17:41.892 00:17:41.892 --- 10.0.0.2 ping statistics --- 00:17:41.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.892 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:17:41.892 00:17:41.892 --- 10.0.0.1 ping statistics --- 00:17:41.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.892 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2345040 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2345040 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 2345040 ']' 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:41.892 02:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:41.892 [2024-05-15 02:34:29.268671] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:17:41.892 [2024-05-15 02:34:29.268746] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.150 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.150 [2024-05-15 02:34:29.344454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.150 [2024-05-15 02:34:29.455129] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.150 [2024-05-15 02:34:29.455185] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.150 [2024-05-15 02:34:29.455202] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.150 [2024-05-15 02:34:29.455227] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.150 [2024-05-15 02:34:29.455239] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.150 [2024-05-15 02:34:29.455271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.084 [2024-05-15 02:34:30.283321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.084 null0 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d6fe568350884c92818db2319fb514da 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.084 [2024-05-15 02:34:30.323358] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:43.084 [2024-05-15 02:34:30.323617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.084 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.342 nvme0n1 00:17:43.342 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.342 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:43.342 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.342 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.342 [ 00:17:43.342 { 00:17:43.342 "name": "nvme0n1", 00:17:43.342 "aliases": [ 00:17:43.342 "d6fe5683-5088-4c92-818d-b2319fb514da" 00:17:43.342 ], 00:17:43.342 "product_name": "NVMe disk", 00:17:43.342 "block_size": 512, 00:17:43.342 "num_blocks": 2097152, 00:17:43.342 "uuid": "d6fe5683-5088-4c92-818d-b2319fb514da", 00:17:43.342 "assigned_rate_limits": { 00:17:43.342 "rw_ios_per_sec": 0, 00:17:43.342 "rw_mbytes_per_sec": 0, 00:17:43.342 "r_mbytes_per_sec": 0, 00:17:43.342 "w_mbytes_per_sec": 0 00:17:43.342 }, 00:17:43.342 "claimed": false, 00:17:43.342 "zoned": false, 00:17:43.342 "supported_io_types": { 00:17:43.342 "read": true, 00:17:43.342 "write": true, 00:17:43.342 "unmap": false, 00:17:43.342 "write_zeroes": true, 00:17:43.342 "flush": true, 00:17:43.342 "reset": true, 00:17:43.342 "compare": true, 00:17:43.342 "compare_and_write": true, 00:17:43.342 "abort": true, 00:17:43.342 "nvme_admin": true, 00:17:43.342 "nvme_io": true 00:17:43.342 }, 00:17:43.342 "memory_domains": [ 00:17:43.342 { 00:17:43.342 "dma_device_id": "system", 00:17:43.342 "dma_device_type": 1 00:17:43.342 } 00:17:43.342 ], 00:17:43.342 "driver_specific": { 00:17:43.342 "nvme": [ 00:17:43.342 { 00:17:43.342 "trid": { 00:17:43.342 "trtype": "TCP", 00:17:43.342 "adrfam": "IPv4", 00:17:43.342 "traddr": "10.0.0.2", 00:17:43.342 "trsvcid": "4420", 00:17:43.342 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:43.342 }, 
00:17:43.342 "ctrlr_data": { 00:17:43.342 "cntlid": 1, 00:17:43.342 "vendor_id": "0x8086", 00:17:43.342 "model_number": "SPDK bdev Controller", 00:17:43.342 "serial_number": "00000000000000000000", 00:17:43.342 "firmware_revision": "24.05", 00:17:43.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.343 "oacs": { 00:17:43.343 "security": 0, 00:17:43.343 "format": 0, 00:17:43.343 "firmware": 0, 00:17:43.343 "ns_manage": 0 00:17:43.343 }, 00:17:43.343 "multi_ctrlr": true, 00:17:43.343 "ana_reporting": false 00:17:43.343 }, 00:17:43.343 "vs": { 00:17:43.343 "nvme_version": "1.3" 00:17:43.343 }, 00:17:43.343 "ns_data": { 00:17:43.343 "id": 1, 00:17:43.343 "can_share": true 00:17:43.343 } 00:17:43.343 } 00:17:43.343 ], 00:17:43.343 "mp_policy": "active_passive" 00:17:43.343 } 00:17:43.343 } 00:17:43.343 ] 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 [2024-05-15 02:34:30.572874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:43.343 [2024-05-15 02:34:30.572977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8cb20 (9): Bad file descriptor 00:17:43.343 [2024-05-15 02:34:30.705090] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 [ 00:17:43.343 { 00:17:43.343 "name": "nvme0n1", 00:17:43.343 "aliases": [ 00:17:43.343 "d6fe5683-5088-4c92-818d-b2319fb514da" 00:17:43.343 ], 00:17:43.343 "product_name": "NVMe disk", 00:17:43.343 "block_size": 512, 00:17:43.343 "num_blocks": 2097152, 00:17:43.343 "uuid": "d6fe5683-5088-4c92-818d-b2319fb514da", 00:17:43.343 "assigned_rate_limits": { 00:17:43.343 "rw_ios_per_sec": 0, 00:17:43.343 "rw_mbytes_per_sec": 0, 00:17:43.343 "r_mbytes_per_sec": 0, 00:17:43.343 "w_mbytes_per_sec": 0 00:17:43.343 }, 00:17:43.343 "claimed": false, 00:17:43.343 "zoned": false, 00:17:43.343 "supported_io_types": { 00:17:43.343 "read": true, 00:17:43.343 "write": true, 00:17:43.343 "unmap": false, 00:17:43.343 "write_zeroes": true, 00:17:43.343 "flush": true, 00:17:43.343 "reset": true, 00:17:43.343 "compare": true, 00:17:43.343 "compare_and_write": true, 00:17:43.343 "abort": true, 00:17:43.343 "nvme_admin": true, 00:17:43.343 "nvme_io": true 00:17:43.343 }, 00:17:43.343 "memory_domains": [ 00:17:43.343 { 00:17:43.343 "dma_device_id": "system", 00:17:43.343 "dma_device_type": 1 00:17:43.343 } 00:17:43.343 ], 00:17:43.343 "driver_specific": { 00:17:43.343 "nvme": [ 00:17:43.343 { 00:17:43.343 "trid": { 00:17:43.343 "trtype": "TCP", 00:17:43.343 "adrfam": "IPv4", 00:17:43.343 "traddr": "10.0.0.2", 00:17:43.343 "trsvcid": "4420", 00:17:43.343 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:43.343 }, 00:17:43.343 "ctrlr_data": { 00:17:43.343 "cntlid": 2, 00:17:43.343 
"vendor_id": "0x8086", 00:17:43.343 "model_number": "SPDK bdev Controller", 00:17:43.343 "serial_number": "00000000000000000000", 00:17:43.343 "firmware_revision": "24.05", 00:17:43.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.343 "oacs": { 00:17:43.343 "security": 0, 00:17:43.343 "format": 0, 00:17:43.343 "firmware": 0, 00:17:43.343 "ns_manage": 0 00:17:43.343 }, 00:17:43.343 "multi_ctrlr": true, 00:17:43.343 "ana_reporting": false 00:17:43.343 }, 00:17:43.343 "vs": { 00:17:43.343 "nvme_version": "1.3" 00:17:43.343 }, 00:17:43.343 "ns_data": { 00:17:43.343 "id": 1, 00:17:43.343 "can_share": true 00:17:43.343 } 00:17:43.343 } 00:17:43.343 ], 00:17:43.343 "mp_policy": "active_passive" 00:17:43.343 } 00:17:43.343 } 00:17:43.343 ] 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.krELyt0Hv8 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.krELyt0Hv8 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 [2024-05-15 02:34:30.749498] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.343 [2024-05-15 02:34:30.749629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.krELyt0Hv8 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.343 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.601 [2024-05-15 02:34:30.757515] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:43.601 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.601 02:34:30 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.krELyt0Hv8 00:17:43.601 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.601 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.601 [2024-05-15 02:34:30.765528] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.601 [2024-05-15 02:34:30.765591] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:43.601 nvme0n1 00:17:43.601 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.601 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:43.601 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.602 [ 00:17:43.602 { 00:17:43.602 "name": "nvme0n1", 00:17:43.602 "aliases": [ 00:17:43.602 "d6fe5683-5088-4c92-818d-b2319fb514da" 00:17:43.602 ], 00:17:43.602 "product_name": "NVMe disk", 00:17:43.602 "block_size": 512, 00:17:43.602 "num_blocks": 2097152, 00:17:43.602 "uuid": "d6fe5683-5088-4c92-818d-b2319fb514da", 00:17:43.602 "assigned_rate_limits": { 00:17:43.602 "rw_ios_per_sec": 0, 00:17:43.602 "rw_mbytes_per_sec": 0, 00:17:43.602 "r_mbytes_per_sec": 0, 00:17:43.602 "w_mbytes_per_sec": 0 00:17:43.602 }, 00:17:43.602 "claimed": false, 00:17:43.602 "zoned": false, 00:17:43.602 "supported_io_types": { 00:17:43.602 "read": true, 00:17:43.602 "write": true, 00:17:43.602 "unmap": false, 00:17:43.602 "write_zeroes": true, 00:17:43.602 "flush": true, 00:17:43.602 "reset": true, 00:17:43.602 "compare": true, 00:17:43.602 "compare_and_write": true, 00:17:43.602 "abort": true, 00:17:43.602 "nvme_admin": true, 00:17:43.602 "nvme_io": true 00:17:43.602 }, 00:17:43.602 "memory_domains": [ 00:17:43.602 { 00:17:43.602 "dma_device_id": "system", 00:17:43.602 "dma_device_type": 1 00:17:43.602 } 00:17:43.602 ], 00:17:43.602 "driver_specific": { 00:17:43.602 "nvme": [ 00:17:43.602 { 00:17:43.602 "trid": { 00:17:43.602 "trtype": "TCP", 00:17:43.602 "adrfam": "IPv4", 00:17:43.602 "traddr": "10.0.0.2", 00:17:43.602 "trsvcid": "4421", 00:17:43.602 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:43.602 }, 00:17:43.602 "ctrlr_data": { 00:17:43.602 "cntlid": 3, 00:17:43.602 "vendor_id": "0x8086", 00:17:43.602 "model_number": "SPDK bdev Controller", 00:17:43.602 "serial_number": "00000000000000000000", 00:17:43.602 "firmware_revision": "24.05", 00:17:43.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.602 "oacs": { 00:17:43.602 "security": 0, 00:17:43.602 "format": 0, 00:17:43.602 "firmware": 0, 00:17:43.602 "ns_manage": 0 00:17:43.602 }, 00:17:43.602 "multi_ctrlr": true, 00:17:43.602 "ana_reporting": false 00:17:43.602 }, 00:17:43.602 "vs": { 00:17:43.602 "nvme_version": "1.3" 00:17:43.602 }, 00:17:43.602 "ns_data": { 00:17:43.602 "id": 1, 00:17:43.602 "can_share": true 00:17:43.602 } 00:17:43.602 } 00:17:43.602 ], 00:17:43.602 "mp_policy": "active_passive" 00:17:43.602 } 00:17:43.602 } 00:17:43.602 ] 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.krELyt0Hv8 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.602 rmmod nvme_tcp 00:17:43.602 rmmod nvme_fabrics 00:17:43.602 rmmod nvme_keyring 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2345040 ']' 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2345040 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 2345040 ']' 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 2345040 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2345040 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2345040' 00:17:43.602 killing process with pid 2345040 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 2345040 00:17:43.602 [2024-05-15 02:34:30.960900] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:43.602 [2024-05-15 02:34:30.960959] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:43.602 [2024-05-15 02:34:30.960992] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:43.602 02:34:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 2345040 00:17:43.861 02:34:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.861 02:34:31 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.861 02:34:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.861 02:34:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.861 02:34:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.861 02:34:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.861 02:34:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.861 02:34:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.393 02:34:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:46.393 00:17:46.393 real 0m6.631s 00:17:46.393 user 0m3.173s 00:17:46.393 sys 0m2.110s 00:17:46.393 02:34:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:46.393 02:34:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:46.393 ************************************ 00:17:46.393 END TEST nvmf_async_init 00:17:46.393 ************************************ 00:17:46.393 02:34:33 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:46.393 02:34:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:46.393 02:34:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:46.393 02:34:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:46.393 ************************************ 00:17:46.393 START TEST dma 00:17:46.393 ************************************ 00:17:46.393 02:34:33 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:46.393 * Looking for test storage... 
00:17:46.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:46.393 02:34:33 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.393 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.393 02:34:33 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.393 02:34:33 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.393 02:34:33 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.393 02:34:33 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.393 02:34:33 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.394 02:34:33 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.394 02:34:33 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:17:46.394 02:34:33 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.394 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:17:46.394 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.394 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.394 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.394 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.394 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.394 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.394 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.394 02:34:33 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.394 02:34:33 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:46.394 02:34:33 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:17:46.394 00:17:46.394 real 0m0.070s 00:17:46.394 user 0m0.030s 00:17:46.394 sys 0m0.045s 00:17:46.394 02:34:33 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:46.394 02:34:33 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:17:46.394 ************************************ 00:17:46.394 END TEST dma 00:17:46.394 ************************************ 00:17:46.394 02:34:33 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:46.394 02:34:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:46.394 02:34:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:46.394 02:34:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:46.394 ************************************ 00:17:46.394 START TEST nvmf_identify 00:17:46.394 ************************************ 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:46.394 * Looking for test storage... 
00:17:46.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:17:46.394 02:34:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:48.962 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:48.962 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:48.962 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.962 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:48.963 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:48.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:17:48.963 00:17:48.963 --- 10.0.0.2 ping statistics --- 00:17:48.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.963 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:17:48.963 00:17:48.963 --- 10.0.0.1 ping statistics --- 00:17:48.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.963 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2347590 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2347590 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 2347590 ']' 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:48.963 02:34:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.963 [2024-05-15 02:34:36.047037] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:48.963 [2024-05-15 02:34:36.047119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.963 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.963 [2024-05-15 02:34:36.133112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.963 [2024-05-15 02:34:36.246626] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:48.963 [2024-05-15 02:34:36.246689] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.963 [2024-05-15 02:34:36.246706] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.963 [2024-05-15 02:34:36.246720] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.963 [2024-05-15 02:34:36.246732] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.963 [2024-05-15 02:34:36.246817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.963 [2024-05-15 02:34:36.246871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.963 [2024-05-15 02:34:36.246999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.963 [2024-05-15 02:34:36.247002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.899 [2024-05-15 02:34:37.039099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.899 Malloc0 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.899 [2024-05-15 02:34:37.120510] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:49.899 [2024-05-15 02:34:37.120845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.899 [ 00:17:49.899 { 00:17:49.899 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:49.899 "subtype": "Discovery", 00:17:49.899 "listen_addresses": [ 00:17:49.899 { 00:17:49.899 "trtype": "TCP", 00:17:49.899 "adrfam": "IPv4", 00:17:49.899 "traddr": "10.0.0.2", 00:17:49.899 "trsvcid": "4420" 00:17:49.899 } 00:17:49.899 ], 00:17:49.899 "allow_any_host": true, 00:17:49.899 "hosts": [] 00:17:49.899 }, 00:17:49.899 { 00:17:49.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.899 "subtype": "NVMe", 00:17:49.899 "listen_addresses": [ 00:17:49.899 { 00:17:49.899 "trtype": "TCP", 00:17:49.899 "adrfam": "IPv4", 00:17:49.899 "traddr": "10.0.0.2", 00:17:49.899 "trsvcid": "4420" 00:17:49.899 } 00:17:49.899 ], 00:17:49.899 "allow_any_host": true, 00:17:49.899 "hosts": [], 00:17:49.899 "serial_number": "SPDK00000000000001", 00:17:49.899 "model_number": "SPDK bdev Controller", 00:17:49.899 "max_namespaces": 32, 00:17:49.899 "min_cntlid": 1, 00:17:49.899 "max_cntlid": 65519, 00:17:49.899 "namespaces": [ 00:17:49.899 { 00:17:49.899 "nsid": 1, 00:17:49.899 "bdev_name": "Malloc0", 00:17:49.899 "name": "Malloc0", 00:17:49.899 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:49.899 "eui64": "ABCDEF0123456789", 00:17:49.899 "uuid": "14b94394-fcc4-4f00-85eb-23021ece7873" 00:17:49.899 } 00:17:49.899 ] 00:17:49.899 } 00:17:49.899 ] 00:17:49.899 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.900 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:49.900 [2024-05-15 02:34:37.159590] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:17:49.900 [2024-05-15 02:34:37.159627] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347747 ] 00:17:49.900 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.900 [2024-05-15 02:34:37.191256] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:49.900 [2024-05-15 02:34:37.191311] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:49.900 [2024-05-15 02:34:37.191321] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:49.900 [2024-05-15 02:34:37.191336] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:49.900 [2024-05-15 02:34:37.191348] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:49.900 [2024-05-15 02:34:37.195000] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:49.900 [2024-05-15 02:34:37.195052] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b7fc80 0 00:17:49.900 [2024-05-15 02:34:37.201943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:49.900 [2024-05-15 02:34:37.201966] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:49.900 [2024-05-15 02:34:37.201981] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:49.900 [2024-05-15 02:34:37.201989] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:49.900 [2024-05-15 02:34:37.202043] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.202057] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.202067] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.900 [2024-05-15 02:34:37.202086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:49.900 [2024-05-15 02:34:37.202116] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.900 [2024-05-15 02:34:37.209963] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.900 [2024-05-15 02:34:37.209981] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.900 [2024-05-15 02:34:37.209989] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.209997] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdee40) on tqpair=0x1b7fc80 00:17:49.900 [2024-05-15 02:34:37.210020] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:49.900 [2024-05-15 02:34:37.210049] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:49.900 [2024-05-15 02:34:37.210060] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:49.900 [2024-05-15 02:34:37.210083] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.210093] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.210101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.900 [2024-05-15 02:34:37.210113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.900 [2024-05-15 02:34:37.210138] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.900 [2024-05-15 02:34:37.210346] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.900 [2024-05-15 02:34:37.210360] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.900 [2024-05-15 02:34:37.210368] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.210380] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdee40) on tqpair=0x1b7fc80 00:17:49.900 [2024-05-15 02:34:37.210393] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:49.900 [2024-05-15 02:34:37.210408] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:49.900 [2024-05-15 02:34:37.210421] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.210445] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.210453] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.900 [2024-05-15 02:34:37.210465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.900 [2024-05-15 02:34:37.210487] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.900 [2024-05-15 02:34:37.210706] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.900 [2024-05-15 02:34:37.210719] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.900 [2024-05-15 02:34:37.210726] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.210734] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdee40) on tqpair=0x1b7fc80 00:17:49.900 [2024-05-15 02:34:37.210746] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:49.900 [2024-05-15 02:34:37.210761] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:49.900 [2024-05-15 02:34:37.210774] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.210782] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.210790] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.900 [2024-05-15 02:34:37.210815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.900 [2024-05-15 02:34:37.210837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.900 [2024-05-15 02:34:37.211029] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.900 [2024-05-15 
02:34:37.211047] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.900 [2024-05-15 02:34:37.211055] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.211063] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdee40) on tqpair=0x1b7fc80 00:17:49.900 [2024-05-15 02:34:37.211075] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:49.900 [2024-05-15 02:34:37.211092] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.211101] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.211107] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.900 [2024-05-15 02:34:37.211118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.900 [2024-05-15 02:34:37.211139] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.900 [2024-05-15 02:34:37.211346] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.900 [2024-05-15 02:34:37.211361] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.900 [2024-05-15 02:34:37.211368] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.211375] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdee40) on tqpair=0x1b7fc80 00:17:49.900 [2024-05-15 02:34:37.211385] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:49.900 [2024-05-15 02:34:37.211398] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:49.900 [2024-05-15 02:34:37.211412] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:49.900 [2024-05-15 02:34:37.211523] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:49.900 [2024-05-15 02:34:37.211531] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:49.900 [2024-05-15 02:34:37.211546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.211553] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.211560] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.900 [2024-05-15 02:34:37.211570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.900 [2024-05-15 02:34:37.211591] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.900 [2024-05-15 02:34:37.211813] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.900 [2024-05-15 02:34:37.211829] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.900 [2024-05-15 02:34:37.211836] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.211842] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdee40) on tqpair=0x1b7fc80 00:17:49.900 [2024-05-15 02:34:37.211852] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:49.900 [2024-05-15 02:34:37.211869] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.211878] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.211884] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.900 [2024-05-15 02:34:37.211895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.900 [2024-05-15 02:34:37.211938] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.900 [2024-05-15 02:34:37.212145] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.900 [2024-05-15 02:34:37.212158] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.900 [2024-05-15 02:34:37.212164] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.900 [2024-05-15 02:34:37.212171] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdee40) on tqpair=0x1b7fc80 00:17:49.900 [2024-05-15 02:34:37.212180] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:49.900 [2024-05-15 02:34:37.212189] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:49.900 [2024-05-15 02:34:37.212202] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:49.900 [2024-05-15 02:34:37.212217] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:49.900 [2024-05-15 02:34:37.212232] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.212240] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.212251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.901 [2024-05-15 02:34:37.212291] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.901 [2024-05-15 02:34:37.212509] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.901 [2024-05-15 02:34:37.212524] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.901 [2024-05-15 02:34:37.212532] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.212539] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc80): datao=0, datal=4096, cccid=0 00:17:49.901 [2024-05-15 02:34:37.212546] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bdee40) on tqpair(0x1b7fc80): expected_datao=0, payload_size=4096 00:17:49.901 [2024-05-15 02:34:37.212554] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.212605] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.212615] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.212768] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.901 [2024-05-15 02:34:37.212783] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.901 [2024-05-15 02:34:37.212790] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.212796] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdee40) on tqpair=0x1b7fc80 00:17:49.901 [2024-05-15 02:34:37.212810] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:49.901 [2024-05-15 02:34:37.212819] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:49.901 [2024-05-15 02:34:37.212827] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:49.901 [2024-05-15 02:34:37.212835] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:49.901 [2024-05-15 02:34:37.212842] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:49.901 [2024-05-15 02:34:37.212851] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:49.901 [2024-05-15 02:34:37.212870] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:49.901 [2024-05-15 02:34:37.212887] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.212895] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.212918] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.212936] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:49.901 [2024-05-15 02:34:37.212959] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.901 [2024-05-15 02:34:37.213146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.901 [2024-05-15 02:34:37.213162] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.901 [2024-05-15 02:34:37.213169] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213175] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdee40) on tqpair=0x1b7fc80 00:17:49.901 [2024-05-15 02:34:37.213190] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213203] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.213214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:49.901 [2024-05-15 02:34:37.213224] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213235] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213241] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.213250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.901 [2024-05-15 02:34:37.213260] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213289] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.213298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.901 [2024-05-15 02:34:37.213307] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213314] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213320] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.213328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.901 [2024-05-15 02:34:37.213337] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:49.901 [2024-05-15 02:34:37.213356] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:49.901 [2024-05-15 02:34:37.213368] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213375] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.213385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.901 [2024-05-15 02:34:37.213407] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdee40, cid 0, qid 0 00:17:49.901 [2024-05-15 02:34:37.213434] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdefa0, cid 1, qid 0 00:17:49.901 [2024-05-15 02:34:37.213442] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf100, cid 2, qid 0 00:17:49.901 [2024-05-15 02:34:37.213450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.901 [2024-05-15 02:34:37.213457] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf3c0, cid 4, qid 0 00:17:49.901 [2024-05-15 02:34:37.213661] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.901 [2024-05-15 02:34:37.213676] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.901 [2024-05-15 02:34:37.213683] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213690] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf3c0) on tqpair=0x1b7fc80 
00:17:49.901 [2024-05-15 02:34:37.213701] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:49.901 [2024-05-15 02:34:37.213710] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:49.901 [2024-05-15 02:34:37.213744] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.213753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.213763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.901 [2024-05-15 02:34:37.213784] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf3c0, cid 4, qid 0 00:17:49.901 [2024-05-15 02:34:37.217941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.901 [2024-05-15 02:34:37.217958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.901 [2024-05-15 02:34:37.217969] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.217976] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc80): datao=0, datal=4096, cccid=4 00:17:49.901 [2024-05-15 02:34:37.217984] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bdf3c0) on tqpair(0x1b7fc80): expected_datao=0, payload_size=4096 00:17:49.901 [2024-05-15 02:34:37.217991] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218001] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218008] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218017] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.901 [2024-05-15 02:34:37.218025] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.901 [2024-05-15 02:34:37.218032] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218038] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf3c0) on tqpair=0x1b7fc80 00:17:49.901 [2024-05-15 02:34:37.218076] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:49.901 [2024-05-15 02:34:37.218117] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218127] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.218139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.901 [2024-05-15 02:34:37.218150] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218157] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218164] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7fc80) 00:17:49.901 [2024-05-15 02:34:37.218173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.901 [2024-05-15 02:34:37.218200] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf3c0, cid 4, qid 0 00:17:49.901 [2024-05-15 02:34:37.218212] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf520, cid 5, qid 0 00:17:49.901 [2024-05-15 02:34:37.218469] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.901 [2024-05-15 02:34:37.218481] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.901 [2024-05-15 02:34:37.218488] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218494] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc80): datao=0, datal=1024, cccid=4 00:17:49.901 [2024-05-15 02:34:37.218502] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bdf3c0) on tqpair(0x1b7fc80): expected_datao=0, payload_size=1024 00:17:49.901 [2024-05-15 02:34:37.218509] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218519] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218526] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218535] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.901 [2024-05-15 02:34:37.218544] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.901 [2024-05-15 02:34:37.218550] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.901 [2024-05-15 02:34:37.218557] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf520) on tqpair=0x1b7fc80 00:17:49.901 [2024-05-15 02:34:37.259160] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.902 [2024-05-15 02:34:37.259178] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.902 [2024-05-15 02:34:37.259186] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.259193] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf3c0) on tqpair=0x1b7fc80 00:17:49.902 [2024-05-15 02:34:37.259217] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.259227] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc80) 00:17:49.902 [2024-05-15 02:34:37.259239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.902 [2024-05-15 02:34:37.259268] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf3c0, cid 4, qid 0 00:17:49.902 [2024-05-15 02:34:37.259523] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.902 [2024-05-15 02:34:37.259539] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.902 [2024-05-15 02:34:37.259546] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.259552] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc80): datao=0, datal=3072, cccid=4 00:17:49.902 [2024-05-15 02:34:37.259560] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bdf3c0) on tqpair(0x1b7fc80): expected_datao=0, payload_size=3072 00:17:49.902 [2024-05-15 02:34:37.259582] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.259622] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:49.902 [2024-05-15 02:34:37.259645] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.259797] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.902 [2024-05-15 02:34:37.259808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.902 [2024-05-15 02:34:37.259815] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.259822] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf3c0) on tqpair=0x1b7fc80 00:17:49.902 [2024-05-15 02:34:37.259839] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.259847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc80) 00:17:49.902 [2024-05-15 02:34:37.259858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.902 [2024-05-15 02:34:37.259885] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf3c0, cid 4, qid 0 00:17:49.902 [2024-05-15 02:34:37.260086] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.902 [2024-05-15 02:34:37.260101] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.902 [2024-05-15 02:34:37.260108] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.260115] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc80): datao=0, datal=8, cccid=4 00:17:49.902 [2024-05-15 02:34:37.260122] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bdf3c0) on tqpair(0x1b7fc80): expected_datao=0, payload_size=8 00:17:49.902 [2024-05-15 02:34:37.260130] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.260139] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.260147] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.304943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.902 [2024-05-15 02:34:37.304961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.902 [2024-05-15 02:34:37.304968] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.902 [2024-05-15 02:34:37.304975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf3c0) on tqpair=0x1b7fc80 00:17:49.902 ===================================================== 00:17:49.902 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:49.902 ===================================================== 00:17:49.902 Controller Capabilities/Features 00:17:49.902 ================================ 00:17:49.902 Vendor ID: 0000 00:17:49.902 Subsystem Vendor ID: 0000 00:17:49.902 Serial Number: .................... 00:17:49.902 Model Number: ........................................ 
00:17:49.902 Firmware Version: 24.05 00:17:49.902 Recommended Arb Burst: 0 00:17:49.902 IEEE OUI Identifier: 00 00 00 00:17:49.902 Multi-path I/O 00:17:49.902 May have multiple subsystem ports: No 00:17:49.902 May have multiple controllers: No 00:17:49.902 Associated with SR-IOV VF: No 00:17:49.902 Max Data Transfer Size: 131072 00:17:49.902 Max Number of Namespaces: 0 00:17:49.902 Max Number of I/O Queues: 1024 00:17:49.902 NVMe Specification Version (VS): 1.3 00:17:49.902 NVMe Specification Version (Identify): 1.3 00:17:49.902 Maximum Queue Entries: 128 00:17:49.902 Contiguous Queues Required: Yes 00:17:49.902 Arbitration Mechanisms Supported 00:17:49.902 Weighted Round Robin: Not Supported 00:17:49.902 Vendor Specific: Not Supported 00:17:49.902 Reset Timeout: 15000 ms 00:17:49.902 Doorbell Stride: 4 bytes 00:17:49.902 NVM Subsystem Reset: Not Supported 00:17:49.902 Command Sets Supported 00:17:49.902 NVM Command Set: Supported 00:17:49.902 Boot Partition: Not Supported 00:17:49.902 Memory Page Size Minimum: 4096 bytes 00:17:49.902 Memory Page Size Maximum: 4096 bytes 00:17:49.902 Persistent Memory Region: Not Supported 00:17:49.902 Optional Asynchronous Events Supported 00:17:49.902 Namespace Attribute Notices: Not Supported 00:17:49.902 Firmware Activation Notices: Not Supported 00:17:49.902 ANA Change Notices: Not Supported 00:17:49.902 PLE Aggregate Log Change Notices: Not Supported 00:17:49.902 LBA Status Info Alert Notices: Not Supported 00:17:49.902 EGE Aggregate Log Change Notices: Not Supported 00:17:49.902 Normal NVM Subsystem Shutdown event: Not Supported 00:17:49.902 Zone Descriptor Change Notices: Not Supported 00:17:49.902 Discovery Log Change Notices: Supported 00:17:49.902 Controller Attributes 00:17:49.902 128-bit Host Identifier: Not Supported 00:17:49.902 Non-Operational Permissive Mode: Not Supported 00:17:49.902 NVM Sets: Not Supported 00:17:49.902 Read Recovery Levels: Not Supported 00:17:49.902 Endurance Groups: Not Supported 00:17:49.902 Predictable Latency Mode: Not Supported 00:17:49.902 Traffic Based Keep ALive: Not Supported 00:17:49.902 Namespace Granularity: Not Supported 00:17:49.902 SQ Associations: Not Supported 00:17:49.902 UUID List: Not Supported 00:17:49.902 Multi-Domain Subsystem: Not Supported 00:17:49.902 Fixed Capacity Management: Not Supported 00:17:49.902 Variable Capacity Management: Not Supported 00:17:49.902 Delete Endurance Group: Not Supported 00:17:49.902 Delete NVM Set: Not Supported 00:17:49.902 Extended LBA Formats Supported: Not Supported 00:17:49.902 Flexible Data Placement Supported: Not Supported 00:17:49.902 00:17:49.902 Controller Memory Buffer Support 00:17:49.902 ================================ 00:17:49.902 Supported: No 00:17:49.902 00:17:49.902 Persistent Memory Region Support 00:17:49.902 ================================ 00:17:49.902 Supported: No 00:17:49.902 00:17:49.902 Admin Command Set Attributes 00:17:49.902 ============================ 00:17:49.902 Security Send/Receive: Not Supported 00:17:49.902 Format NVM: Not Supported 00:17:49.902 Firmware Activate/Download: Not Supported 00:17:49.902 Namespace Management: Not Supported 00:17:49.902 Device Self-Test: Not Supported 00:17:49.902 Directives: Not Supported 00:17:49.902 NVMe-MI: Not Supported 00:17:49.902 Virtualization Management: Not Supported 00:17:49.902 Doorbell Buffer Config: Not Supported 00:17:49.902 Get LBA Status Capability: Not Supported 00:17:49.902 Command & Feature Lockdown Capability: Not Supported 00:17:49.902 Abort Command Limit: 1 00:17:49.902 Async 
Event Request Limit: 4 00:17:49.902 Number of Firmware Slots: N/A 00:17:49.902 Firmware Slot 1 Read-Only: N/A 00:17:49.902 Firmware Activation Without Reset: N/A 00:17:49.902 Multiple Update Detection Support: N/A 00:17:49.902 Firmware Update Granularity: No Information Provided 00:17:49.902 Per-Namespace SMART Log: No 00:17:49.902 Asymmetric Namespace Access Log Page: Not Supported 00:17:49.902 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:49.902 Command Effects Log Page: Not Supported 00:17:49.902 Get Log Page Extended Data: Supported 00:17:49.902 Telemetry Log Pages: Not Supported 00:17:49.902 Persistent Event Log Pages: Not Supported 00:17:49.902 Supported Log Pages Log Page: May Support 00:17:49.902 Commands Supported & Effects Log Page: Not Supported 00:17:49.902 Feature Identifiers & Effects Log Page:May Support 00:17:49.902 NVMe-MI Commands & Effects Log Page: May Support 00:17:49.902 Data Area 4 for Telemetry Log: Not Supported 00:17:49.902 Error Log Page Entries Supported: 128 00:17:49.902 Keep Alive: Not Supported 00:17:49.902 00:17:49.902 NVM Command Set Attributes 00:17:49.902 ========================== 00:17:49.902 Submission Queue Entry Size 00:17:49.902 Max: 1 00:17:49.902 Min: 1 00:17:49.902 Completion Queue Entry Size 00:17:49.902 Max: 1 00:17:49.902 Min: 1 00:17:49.902 Number of Namespaces: 0 00:17:49.902 Compare Command: Not Supported 00:17:49.902 Write Uncorrectable Command: Not Supported 00:17:49.902 Dataset Management Command: Not Supported 00:17:49.902 Write Zeroes Command: Not Supported 00:17:49.902 Set Features Save Field: Not Supported 00:17:49.902 Reservations: Not Supported 00:17:49.902 Timestamp: Not Supported 00:17:49.902 Copy: Not Supported 00:17:49.902 Volatile Write Cache: Not Present 00:17:49.902 Atomic Write Unit (Normal): 1 00:17:49.902 Atomic Write Unit (PFail): 1 00:17:49.902 Atomic Compare & Write Unit: 1 00:17:49.902 Fused Compare & Write: Supported 00:17:49.902 Scatter-Gather List 00:17:49.902 SGL Command Set: Supported 00:17:49.902 SGL Keyed: Supported 00:17:49.902 SGL Bit Bucket Descriptor: Not Supported 00:17:49.902 SGL Metadata Pointer: Not Supported 00:17:49.902 Oversized SGL: Not Supported 00:17:49.903 SGL Metadata Address: Not Supported 00:17:49.903 SGL Offset: Supported 00:17:49.903 Transport SGL Data Block: Not Supported 00:17:49.903 Replay Protected Memory Block: Not Supported 00:17:49.903 00:17:49.903 Firmware Slot Information 00:17:49.903 ========================= 00:17:49.903 Active slot: 0 00:17:49.903 00:17:49.903 00:17:49.903 Error Log 00:17:49.903 ========= 00:17:49.903 00:17:49.903 Active Namespaces 00:17:49.903 ================= 00:17:49.903 Discovery Log Page 00:17:49.903 ================== 00:17:49.903 Generation Counter: 2 00:17:49.903 Number of Records: 2 00:17:49.903 Record Format: 0 00:17:49.903 00:17:49.903 Discovery Log Entry 0 00:17:49.903 ---------------------- 00:17:49.903 Transport Type: 3 (TCP) 00:17:49.903 Address Family: 1 (IPv4) 00:17:49.903 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:49.903 Entry Flags: 00:17:49.903 Duplicate Returned Information: 1 00:17:49.903 Explicit Persistent Connection Support for Discovery: 1 00:17:49.903 Transport Requirements: 00:17:49.903 Secure Channel: Not Required 00:17:49.903 Port ID: 0 (0x0000) 00:17:49.903 Controller ID: 65535 (0xffff) 00:17:49.903 Admin Max SQ Size: 128 00:17:49.903 Transport Service Identifier: 4420 00:17:49.903 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:49.903 Transport Address: 10.0.0.2 00:17:49.903 
Discovery Log Entry 1 00:17:49.903 ---------------------- 00:17:49.903 Transport Type: 3 (TCP) 00:17:49.903 Address Family: 1 (IPv4) 00:17:49.903 Subsystem Type: 2 (NVM Subsystem) 00:17:49.903 Entry Flags: 00:17:49.903 Duplicate Returned Information: 0 00:17:49.903 Explicit Persistent Connection Support for Discovery: 0 00:17:49.903 Transport Requirements: 00:17:49.903 Secure Channel: Not Required 00:17:49.903 Port ID: 0 (0x0000) 00:17:49.903 Controller ID: 65535 (0xffff) 00:17:49.903 Admin Max SQ Size: 128 00:17:49.903 Transport Service Identifier: 4420 00:17:49.903 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:49.903 Transport Address: 10.0.0.2 [2024-05-15 02:34:37.305109] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:49.903 [2024-05-15 02:34:37.305136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.903 [2024-05-15 02:34:37.305148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.903 [2024-05-15 02:34:37.305158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.903 [2024-05-15 02:34:37.305171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.903 [2024-05-15 02:34:37.305186] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.305194] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.305201] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.903 [2024-05-15 02:34:37.305212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.903 [2024-05-15 02:34:37.305243] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.903 [2024-05-15 02:34:37.305452] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.903 [2024-05-15 02:34:37.305467] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.903 [2024-05-15 02:34:37.305474] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.305481] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.903 [2024-05-15 02:34:37.305494] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.305502] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.305524] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.903 [2024-05-15 02:34:37.305534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.903 [2024-05-15 02:34:37.305561] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.903 [2024-05-15 02:34:37.305793] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.903 [2024-05-15 02:34:37.305808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.903 [2024-05-15 02:34:37.305815] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.305822] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.903 [2024-05-15 02:34:37.305832] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:49.903 [2024-05-15 02:34:37.305841] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:49.903 [2024-05-15 02:34:37.305857] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.305866] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.305888] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.903 [2024-05-15 02:34:37.305898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.903 [2024-05-15 02:34:37.305919] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.903 [2024-05-15 02:34:37.306108] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.903 [2024-05-15 02:34:37.306124] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.903 [2024-05-15 02:34:37.306131] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306138] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.903 [2024-05-15 02:34:37.306157] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306166] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306173] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.903 [2024-05-15 02:34:37.306183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.903 [2024-05-15 02:34:37.306204] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.903 [2024-05-15 02:34:37.306364] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.903 [2024-05-15 02:34:37.306377] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.903 [2024-05-15 02:34:37.306384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306390] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.903 [2024-05-15 02:34:37.306408] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306417] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306423] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.903 [2024-05-15 02:34:37.306434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.903 [2024-05-15 02:34:37.306454] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.903 [2024-05-15 02:34:37.306669] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.903 [2024-05-15 
02:34:37.306685] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.903 [2024-05-15 02:34:37.306692] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306699] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.903 [2024-05-15 02:34:37.306717] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306726] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.903 [2024-05-15 02:34:37.306744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.903 [2024-05-15 02:34:37.306765] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.903 [2024-05-15 02:34:37.306971] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.903 [2024-05-15 02:34:37.306985] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.903 [2024-05-15 02:34:37.306992] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.306998] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.903 [2024-05-15 02:34:37.307015] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.307025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.307031] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.903 [2024-05-15 02:34:37.307041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.903 [2024-05-15 02:34:37.307062] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.903 [2024-05-15 02:34:37.307232] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.903 [2024-05-15 02:34:37.307247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.903 [2024-05-15 02:34:37.307254] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.307261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.903 [2024-05-15 02:34:37.307278] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.307287] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.903 [2024-05-15 02:34:37.307294] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.903 [2024-05-15 02:34:37.307304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.903 [2024-05-15 02:34:37.307324] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.903 [2024-05-15 02:34:37.307483] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.903 [2024-05-15 02:34:37.307498] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.903 [2024-05-15 02:34:37.307505] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:49.903 [2024-05-15 02:34:37.307511] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.904 [2024-05-15 02:34:37.307529] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.307538] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.307545] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.904 [2024-05-15 02:34:37.307555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.904 [2024-05-15 02:34:37.307576] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.904 [2024-05-15 02:34:37.307781] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.904 [2024-05-15 02:34:37.307796] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.904 [2024-05-15 02:34:37.307803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.307810] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.904 [2024-05-15 02:34:37.307827] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.307836] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.307843] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.904 [2024-05-15 02:34:37.307853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.904 [2024-05-15 02:34:37.307874] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.904 [2024-05-15 02:34:37.308078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.904 [2024-05-15 02:34:37.308092] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.904 [2024-05-15 02:34:37.308099] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.308105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.904 [2024-05-15 02:34:37.308123] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.308132] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.308138] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.904 [2024-05-15 02:34:37.308149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.904 [2024-05-15 02:34:37.308169] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.904 [2024-05-15 02:34:37.308327] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.904 [2024-05-15 02:34:37.308342] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.904 [2024-05-15 02:34:37.308348] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.308355] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.904 [2024-05-15 02:34:37.308373] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.308382] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.308388] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.904 [2024-05-15 02:34:37.308399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.904 [2024-05-15 02:34:37.308419] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:49.904 [2024-05-15 02:34:37.308623] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.904 [2024-05-15 02:34:37.308638] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.904 [2024-05-15 02:34:37.308646] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.308652] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:49.904 [2024-05-15 02:34:37.308670] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.308679] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.904 [2024-05-15 02:34:37.308685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:49.904 [2024-05-15 02:34:37.308696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.904 [2024-05-15 02:34:37.308716] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:50.164 [2024-05-15 02:34:37.308926] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.164 [2024-05-15 02:34:37.312966] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.164 [2024-05-15 02:34:37.312975] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.164 [2024-05-15 02:34:37.312982] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:50.164 [2024-05-15 02:34:37.313002] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.164 [2024-05-15 02:34:37.313012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.164 [2024-05-15 02:34:37.313019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc80) 00:17:50.164 [2024-05-15 02:34:37.313029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.164 [2024-05-15 02:34:37.313051] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bdf260, cid 3, qid 0 00:17:50.164 [2024-05-15 02:34:37.313257] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.164 [2024-05-15 02:34:37.313269] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.164 [2024-05-15 02:34:37.313276] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.164 [2024-05-15 02:34:37.313283] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1bdf260) on tqpair=0x1b7fc80 00:17:50.165 [2024-05-15 02:34:37.313296] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:17:50.165 00:17:50.165 02:34:37 nvmf_tcp.nvmf_identify -- 
host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:50.165 [2024-05-15 02:34:37.344016] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:50.165 [2024-05-15 02:34:37.344057] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347749 ] 00:17:50.165 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.165 [2024-05-15 02:34:37.374598] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:50.165 [2024-05-15 02:34:37.374643] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:50.165 [2024-05-15 02:34:37.374653] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:50.165 [2024-05-15 02:34:37.374665] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:50.165 [2024-05-15 02:34:37.374676] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:50.165 [2024-05-15 02:34:37.377983] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:50.165 [2024-05-15 02:34:37.378037] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x84cc80 0 00:17:50.165 [2024-05-15 02:34:37.385955] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:50.165 [2024-05-15 02:34:37.385973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:50.165 [2024-05-15 02:34:37.385985] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:50.165 [2024-05-15 02:34:37.385992] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:50.165 [2024-05-15 02:34:37.386047] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.386060] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.386067] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.165 [2024-05-15 02:34:37.386080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:50.165 [2024-05-15 02:34:37.386107] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.165 [2024-05-15 02:34:37.393962] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.165 [2024-05-15 02:34:37.393981] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.165 [2024-05-15 02:34:37.393989] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.393996] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8abe40) on tqpair=0x84cc80 00:17:50.165 [2024-05-15 02:34:37.394009] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:50.165 [2024-05-15 02:34:37.394020] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:50.165 [2024-05-15 02:34:37.394029] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:50.165 [2024-05-15 02:34:37.394046] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394055] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394062] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.165 [2024-05-15 02:34:37.394073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.165 [2024-05-15 02:34:37.394097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.165 [2024-05-15 02:34:37.394266] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.165 [2024-05-15 02:34:37.394282] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.165 [2024-05-15 02:34:37.394289] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394296] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8abe40) on tqpair=0x84cc80 00:17:50.165 [2024-05-15 02:34:37.394304] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:50.165 [2024-05-15 02:34:37.394317] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:50.165 [2024-05-15 02:34:37.394329] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394337] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394344] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.165 [2024-05-15 02:34:37.394354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.165 [2024-05-15 02:34:37.394375] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.165 [2024-05-15 02:34:37.394531] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.165 [2024-05-15 02:34:37.394546] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.165 [2024-05-15 02:34:37.394557] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394565] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8abe40) on tqpair=0x84cc80 00:17:50.165 [2024-05-15 02:34:37.394573] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:50.165 [2024-05-15 02:34:37.394588] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:50.165 [2024-05-15 02:34:37.394600] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394608] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.165 [2024-05-15 02:34:37.394625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.165 [2024-05-15 02:34:37.394646] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.165 [2024-05-15 02:34:37.394804] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.165 [2024-05-15 02:34:37.394819] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.165 [2024-05-15 02:34:37.394826] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394833] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8abe40) on tqpair=0x84cc80 00:17:50.165 [2024-05-15 02:34:37.394841] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:50.165 [2024-05-15 02:34:37.394858] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394867] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.394874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.165 [2024-05-15 02:34:37.394884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.165 [2024-05-15 02:34:37.394905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.165 [2024-05-15 02:34:37.395069] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.165 [2024-05-15 02:34:37.395082] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.165 [2024-05-15 02:34:37.395089] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.395095] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8abe40) on tqpair=0x84cc80 00:17:50.165 [2024-05-15 02:34:37.395103] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:50.165 [2024-05-15 02:34:37.395111] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:50.165 [2024-05-15 02:34:37.395124] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:50.165 [2024-05-15 02:34:37.395234] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:50.165 [2024-05-15 02:34:37.395241] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:50.165 [2024-05-15 02:34:37.395268] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.395275] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.395282] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.165 [2024-05-15 02:34:37.395292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.165 [2024-05-15 02:34:37.395313] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.165 [2024-05-15 02:34:37.395491] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.165 [2024-05-15 02:34:37.395506] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.165 [2024-05-15 02:34:37.395513] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.395520] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8abe40) on tqpair=0x84cc80 00:17:50.165 [2024-05-15 02:34:37.395528] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:50.165 [2024-05-15 02:34:37.395545] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.395555] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.395561] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.165 [2024-05-15 02:34:37.395572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.165 [2024-05-15 02:34:37.395593] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.165 [2024-05-15 02:34:37.395746] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.165 [2024-05-15 02:34:37.395761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.165 [2024-05-15 02:34:37.395768] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.395775] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8abe40) on tqpair=0x84cc80 00:17:50.165 [2024-05-15 02:34:37.395782] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:50.165 [2024-05-15 02:34:37.395791] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:50.165 [2024-05-15 02:34:37.395804] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:50.165 [2024-05-15 02:34:37.395818] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:50.165 [2024-05-15 02:34:37.395832] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.395840] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.165 [2024-05-15 02:34:37.395851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.165 [2024-05-15 02:34:37.395872] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.165 [2024-05-15 02:34:37.396092] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.165 [2024-05-15 02:34:37.396108] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.165 [2024-05-15 02:34:37.396115] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.396121] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84cc80): datao=0, datal=4096, cccid=0 00:17:50.165 [2024-05-15 02:34:37.396129] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8abe40) on tqpair(0x84cc80): expected_datao=0, 
payload_size=4096 00:17:50.165 [2024-05-15 02:34:37.396137] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.396147] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.396155] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.396215] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.165 [2024-05-15 02:34:37.396227] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.165 [2024-05-15 02:34:37.396233] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.396240] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8abe40) on tqpair=0x84cc80 00:17:50.165 [2024-05-15 02:34:37.396250] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:50.165 [2024-05-15 02:34:37.396263] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:50.165 [2024-05-15 02:34:37.396271] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:50.165 [2024-05-15 02:34:37.396277] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:50.165 [2024-05-15 02:34:37.396285] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:50.165 [2024-05-15 02:34:37.396292] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:50.165 [2024-05-15 02:34:37.396311] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:50.165 [2024-05-15 02:34:37.396327] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.396336] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.165 [2024-05-15 02:34:37.396342] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.165 [2024-05-15 02:34:37.396353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.165 [2024-05-15 02:34:37.396375] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.165 [2024-05-15 02:34:37.396536] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.165 [2024-05-15 02:34:37.396548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.165 [2024-05-15 02:34:37.396555] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396561] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8abe40) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.396572] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396580] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396586] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.396596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:50.166 [2024-05-15 02:34:37.396606] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396613] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396619] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.396628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.166 [2024-05-15 02:34:37.396638] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396645] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396651] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.396660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.166 [2024-05-15 02:34:37.396669] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396676] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396683] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.396691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.166 [2024-05-15 02:34:37.396700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.396718] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.396734] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.396757] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.396768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.166 [2024-05-15 02:34:37.396789] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abe40, cid 0, qid 0 00:17:50.166 [2024-05-15 02:34:37.396800] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8abfa0, cid 1, qid 0 00:17:50.166 [2024-05-15 02:34:37.396823] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac100, cid 2, qid 0 00:17:50.166 [2024-05-15 02:34:37.396831] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac260, cid 3, qid 0 00:17:50.166 [2024-05-15 02:34:37.396838] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac3c0, cid 4, qid 0 00:17:50.166 [2024-05-15 02:34:37.397036] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.166 [2024-05-15 02:34:37.397052] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.166 [2024-05-15 02:34:37.397059] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.397066] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac3c0) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.397074] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:50.166 [2024-05-15 02:34:37.397083] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.397097] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.397113] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.397125] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.397133] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.397139] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.397150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.166 [2024-05-15 02:34:37.397171] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac3c0, cid 4, qid 0 00:17:50.166 [2024-05-15 02:34:37.397343] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.166 [2024-05-15 02:34:37.397359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.166 [2024-05-15 02:34:37.397366] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.397372] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac3c0) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.397430] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.397449] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.397464] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.397472] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.397482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.166 [2024-05-15 02:34:37.397504] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac3c0, cid 4, qid 0 00:17:50.166 [2024-05-15 02:34:37.397677] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.166 [2024-05-15 02:34:37.397690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.166 [2024-05-15 02:34:37.397697] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.397703] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84cc80): datao=0, datal=4096, cccid=4 00:17:50.166 [2024-05-15 02:34:37.397711] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ac3c0) on tqpair(0x84cc80): expected_datao=0, payload_size=4096 00:17:50.166 [2024-05-15 02:34:37.397719] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.166 [2024-05-15 
02:34:37.397761] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.397770] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.441956] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.166 [2024-05-15 02:34:37.441974] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.166 [2024-05-15 02:34:37.441982] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.441989] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac3c0) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.442015] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:50.166 [2024-05-15 02:34:37.442034] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.442052] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.442082] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.442091] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.442102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.166 [2024-05-15 02:34:37.442126] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac3c0, cid 4, qid 0 00:17:50.166 [2024-05-15 02:34:37.442331] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.166 [2024-05-15 02:34:37.442347] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.166 [2024-05-15 02:34:37.442354] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.442360] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84cc80): datao=0, datal=4096, cccid=4 00:17:50.166 [2024-05-15 02:34:37.442368] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ac3c0) on tqpair(0x84cc80): expected_datao=0, payload_size=4096 00:17:50.166 [2024-05-15 02:34:37.442375] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.442412] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.442422] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.486944] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.166 [2024-05-15 02:34:37.486963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.166 [2024-05-15 02:34:37.486972] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.486980] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac3c0) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.486998] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.487017] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:50.166 
[2024-05-15 02:34:37.487032] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.487056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.166 [2024-05-15 02:34:37.487081] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac3c0, cid 4, qid 0 00:17:50.166 [2024-05-15 02:34:37.487246] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.166 [2024-05-15 02:34:37.487262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.166 [2024-05-15 02:34:37.487269] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487275] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84cc80): datao=0, datal=4096, cccid=4 00:17:50.166 [2024-05-15 02:34:37.487282] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ac3c0) on tqpair(0x84cc80): expected_datao=0, payload_size=4096 00:17:50.166 [2024-05-15 02:34:37.487290] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487336] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487345] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487462] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.166 [2024-05-15 02:34:37.487473] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.166 [2024-05-15 02:34:37.487480] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487487] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac3c0) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.487519] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.487535] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.487550] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.487560] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.487569] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.487578] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:50.166 [2024-05-15 02:34:37.487586] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:50.166 [2024-05-15 02:34:37.487594] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:50.166 [2024-05-15 02:34:37.487617] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487627] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.487638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.166 [2024-05-15 02:34:37.487650] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487657] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487664] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.487673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.166 [2024-05-15 02:34:37.487699] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac3c0, cid 4, qid 0 00:17:50.166 [2024-05-15 02:34:37.487711] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac520, cid 5, qid 0 00:17:50.166 [2024-05-15 02:34:37.487893] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.166 [2024-05-15 02:34:37.487909] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.166 [2024-05-15 02:34:37.487921] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487938] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac3c0) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.487968] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.166 [2024-05-15 02:34:37.487980] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.166 [2024-05-15 02:34:37.487987] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.487993] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac520) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.488010] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.488020] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.488030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.166 [2024-05-15 02:34:37.488052] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac520, cid 5, qid 0 00:17:50.166 [2024-05-15 02:34:37.488227] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.166 [2024-05-15 02:34:37.488245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.166 [2024-05-15 02:34:37.488252] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.488258] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac520) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.488274] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.488283] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.488293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.166 [2024-05-15 02:34:37.488314] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x8ac520, cid 5, qid 0 00:17:50.166 [2024-05-15 02:34:37.488488] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.166 [2024-05-15 02:34:37.488503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.166 [2024-05-15 02:34:37.488510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.488517] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac520) on tqpair=0x84cc80 00:17:50.166 [2024-05-15 02:34:37.488533] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.166 [2024-05-15 02:34:37.488542] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84cc80) 00:17:50.166 [2024-05-15 02:34:37.488552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.166 [2024-05-15 02:34:37.488572] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac520, cid 5, qid 0 00:17:50.167 [2024-05-15 02:34:37.488722] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.167 [2024-05-15 02:34:37.488734] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.167 [2024-05-15 02:34:37.488741] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.488747] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac520) on tqpair=0x84cc80 00:17:50.167 [2024-05-15 02:34:37.488766] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.488776] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84cc80) 00:17:50.167 [2024-05-15 02:34:37.488787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.167 [2024-05-15 02:34:37.488799] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.488806] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84cc80) 00:17:50.167 [2024-05-15 02:34:37.488821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.167 [2024-05-15 02:34:37.488834] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.488842] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x84cc80) 00:17:50.167 [2024-05-15 02:34:37.488851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.167 [2024-05-15 02:34:37.488867] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.488876] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x84cc80) 00:17:50.167 [2024-05-15 02:34:37.488885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.167 [2024-05-15 02:34:37.488907] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac520, cid 5, qid 0 00:17:50.167 [2024-05-15 02:34:37.488918] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac3c0, cid 4, qid 0 00:17:50.167 [2024-05-15 02:34:37.488925] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac680, cid 6, qid 0 00:17:50.167 [2024-05-15 02:34:37.488944] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac7e0, cid 7, qid 0 00:17:50.167 [2024-05-15 02:34:37.489275] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.167 [2024-05-15 02:34:37.489291] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.167 [2024-05-15 02:34:37.489298] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489305] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84cc80): datao=0, datal=8192, cccid=5 00:17:50.167 [2024-05-15 02:34:37.489312] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ac520) on tqpair(0x84cc80): expected_datao=0, payload_size=8192 00:17:50.167 [2024-05-15 02:34:37.489320] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489330] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489338] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489346] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.167 [2024-05-15 02:34:37.489355] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.167 [2024-05-15 02:34:37.489361] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489368] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84cc80): datao=0, datal=512, cccid=4 00:17:50.167 [2024-05-15 02:34:37.489376] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ac3c0) on tqpair(0x84cc80): expected_datao=0, payload_size=512 00:17:50.167 [2024-05-15 02:34:37.489384] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489394] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489402] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.167 [2024-05-15 02:34:37.489421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.167 [2024-05-15 02:34:37.489427] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489434] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84cc80): datao=0, datal=512, cccid=6 00:17:50.167 [2024-05-15 02:34:37.489442] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ac680) on tqpair(0x84cc80): expected_datao=0, payload_size=512 00:17:50.167 [2024-05-15 02:34:37.489450] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489459] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489466] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489479] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.167 [2024-05-15 02:34:37.489490] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.167 [2024-05-15 02:34:37.489498] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:17:50.167 [2024-05-15 02:34:37.489505] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84cc80): datao=0, datal=4096, cccid=7 00:17:50.167 [2024-05-15 02:34:37.489513] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ac7e0) on tqpair(0x84cc80): expected_datao=0, payload_size=4096 00:17:50.167 [2024-05-15 02:34:37.489521] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489531] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489539] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489551] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.167 [2024-05-15 02:34:37.489560] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.167 [2024-05-15 02:34:37.489567] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489573] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac520) on tqpair=0x84cc80 00:17:50.167 [2024-05-15 02:34:37.489593] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.167 [2024-05-15 02:34:37.489605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.167 [2024-05-15 02:34:37.489611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489618] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac3c0) on tqpair=0x84cc80 00:17:50.167 [2024-05-15 02:34:37.489632] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.167 [2024-05-15 02:34:37.489642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.167 [2024-05-15 02:34:37.489649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489656] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac680) on tqpair=0x84cc80 00:17:50.167 [2024-05-15 02:34:37.489686] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.167 [2024-05-15 02:34:37.489697] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.167 [2024-05-15 02:34:37.489703] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.167 [2024-05-15 02:34:37.489709] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac7e0) on tqpair=0x84cc80 00:17:50.167 ===================================================== 00:17:50.167 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:50.167 ===================================================== 00:17:50.167 Controller Capabilities/Features 00:17:50.167 ================================ 00:17:50.167 Vendor ID: 8086 00:17:50.167 Subsystem Vendor ID: 8086 00:17:50.167 Serial Number: SPDK00000000000001 00:17:50.167 Model Number: SPDK bdev Controller 00:17:50.167 Firmware Version: 24.05 00:17:50.167 Recommended Arb Burst: 6 00:17:50.167 IEEE OUI Identifier: e4 d2 5c 00:17:50.167 Multi-path I/O 00:17:50.167 May have multiple subsystem ports: Yes 00:17:50.167 May have multiple controllers: Yes 00:17:50.167 Associated with SR-IOV VF: No 00:17:50.167 Max Data Transfer Size: 131072 00:17:50.167 Max Number of Namespaces: 32 00:17:50.167 Max Number of I/O Queues: 127 00:17:50.167 NVMe Specification Version (VS): 1.3 00:17:50.167 NVMe Specification Version (Identify): 1.3 00:17:50.167 Maximum Queue Entries: 128 
00:17:50.167 Contiguous Queues Required: Yes 00:17:50.167 Arbitration Mechanisms Supported 00:17:50.167 Weighted Round Robin: Not Supported 00:17:50.167 Vendor Specific: Not Supported 00:17:50.167 Reset Timeout: 15000 ms 00:17:50.167 Doorbell Stride: 4 bytes 00:17:50.167 NVM Subsystem Reset: Not Supported 00:17:50.167 Command Sets Supported 00:17:50.167 NVM Command Set: Supported 00:17:50.167 Boot Partition: Not Supported 00:17:50.167 Memory Page Size Minimum: 4096 bytes 00:17:50.167 Memory Page Size Maximum: 4096 bytes 00:17:50.167 Persistent Memory Region: Not Supported 00:17:50.167 Optional Asynchronous Events Supported 00:17:50.167 Namespace Attribute Notices: Supported 00:17:50.167 Firmware Activation Notices: Not Supported 00:17:50.167 ANA Change Notices: Not Supported 00:17:50.167 PLE Aggregate Log Change Notices: Not Supported 00:17:50.167 LBA Status Info Alert Notices: Not Supported 00:17:50.167 EGE Aggregate Log Change Notices: Not Supported 00:17:50.167 Normal NVM Subsystem Shutdown event: Not Supported 00:17:50.167 Zone Descriptor Change Notices: Not Supported 00:17:50.167 Discovery Log Change Notices: Not Supported 00:17:50.167 Controller Attributes 00:17:50.167 128-bit Host Identifier: Supported 00:17:50.167 Non-Operational Permissive Mode: Not Supported 00:17:50.167 NVM Sets: Not Supported 00:17:50.167 Read Recovery Levels: Not Supported 00:17:50.167 Endurance Groups: Not Supported 00:17:50.167 Predictable Latency Mode: Not Supported 00:17:50.167 Traffic Based Keep ALive: Not Supported 00:17:50.167 Namespace Granularity: Not Supported 00:17:50.167 SQ Associations: Not Supported 00:17:50.167 UUID List: Not Supported 00:17:50.167 Multi-Domain Subsystem: Not Supported 00:17:50.167 Fixed Capacity Management: Not Supported 00:17:50.167 Variable Capacity Management: Not Supported 00:17:50.167 Delete Endurance Group: Not Supported 00:17:50.167 Delete NVM Set: Not Supported 00:17:50.167 Extended LBA Formats Supported: Not Supported 00:17:50.167 Flexible Data Placement Supported: Not Supported 00:17:50.167 00:17:50.167 Controller Memory Buffer Support 00:17:50.167 ================================ 00:17:50.167 Supported: No 00:17:50.167 00:17:50.167 Persistent Memory Region Support 00:17:50.167 ================================ 00:17:50.167 Supported: No 00:17:50.167 00:17:50.167 Admin Command Set Attributes 00:17:50.167 ============================ 00:17:50.167 Security Send/Receive: Not Supported 00:17:50.167 Format NVM: Not Supported 00:17:50.167 Firmware Activate/Download: Not Supported 00:17:50.167 Namespace Management: Not Supported 00:17:50.167 Device Self-Test: Not Supported 00:17:50.167 Directives: Not Supported 00:17:50.167 NVMe-MI: Not Supported 00:17:50.167 Virtualization Management: Not Supported 00:17:50.167 Doorbell Buffer Config: Not Supported 00:17:50.167 Get LBA Status Capability: Not Supported 00:17:50.167 Command & Feature Lockdown Capability: Not Supported 00:17:50.167 Abort Command Limit: 4 00:17:50.167 Async Event Request Limit: 4 00:17:50.167 Number of Firmware Slots: N/A 00:17:50.167 Firmware Slot 1 Read-Only: N/A 00:17:50.167 Firmware Activation Without Reset: N/A 00:17:50.167 Multiple Update Detection Support: N/A 00:17:50.167 Firmware Update Granularity: No Information Provided 00:17:50.167 Per-Namespace SMART Log: No 00:17:50.167 Asymmetric Namespace Access Log Page: Not Supported 00:17:50.167 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:50.167 Command Effects Log Page: Supported 00:17:50.167 Get Log Page Extended Data: Supported 00:17:50.167 Telemetry 
Log Pages: Not Supported 00:17:50.167 Persistent Event Log Pages: Not Supported 00:17:50.167 Supported Log Pages Log Page: May Support 00:17:50.167 Commands Supported & Effects Log Page: Not Supported 00:17:50.167 Feature Identifiers & Effects Log Page:May Support 00:17:50.167 NVMe-MI Commands & Effects Log Page: May Support 00:17:50.167 Data Area 4 for Telemetry Log: Not Supported 00:17:50.167 Error Log Page Entries Supported: 128 00:17:50.167 Keep Alive: Supported 00:17:50.167 Keep Alive Granularity: 10000 ms 00:17:50.167 00:17:50.167 NVM Command Set Attributes 00:17:50.167 ========================== 00:17:50.167 Submission Queue Entry Size 00:17:50.167 Max: 64 00:17:50.167 Min: 64 00:17:50.167 Completion Queue Entry Size 00:17:50.167 Max: 16 00:17:50.167 Min: 16 00:17:50.168 Number of Namespaces: 32 00:17:50.168 Compare Command: Supported 00:17:50.168 Write Uncorrectable Command: Not Supported 00:17:50.168 Dataset Management Command: Supported 00:17:50.168 Write Zeroes Command: Supported 00:17:50.168 Set Features Save Field: Not Supported 00:17:50.168 Reservations: Supported 00:17:50.168 Timestamp: Not Supported 00:17:50.168 Copy: Supported 00:17:50.168 Volatile Write Cache: Present 00:17:50.168 Atomic Write Unit (Normal): 1 00:17:50.168 Atomic Write Unit (PFail): 1 00:17:50.168 Atomic Compare & Write Unit: 1 00:17:50.168 Fused Compare & Write: Supported 00:17:50.168 Scatter-Gather List 00:17:50.168 SGL Command Set: Supported 00:17:50.168 SGL Keyed: Supported 00:17:50.168 SGL Bit Bucket Descriptor: Not Supported 00:17:50.168 SGL Metadata Pointer: Not Supported 00:17:50.168 Oversized SGL: Not Supported 00:17:50.168 SGL Metadata Address: Not Supported 00:17:50.168 SGL Offset: Supported 00:17:50.168 Transport SGL Data Block: Not Supported 00:17:50.168 Replay Protected Memory Block: Not Supported 00:17:50.168 00:17:50.168 Firmware Slot Information 00:17:50.168 ========================= 00:17:50.168 Active slot: 1 00:17:50.168 Slot 1 Firmware Revision: 24.05 00:17:50.168 00:17:50.168 00:17:50.168 Commands Supported and Effects 00:17:50.168 ============================== 00:17:50.168 Admin Commands 00:17:50.168 -------------- 00:17:50.168 Get Log Page (02h): Supported 00:17:50.168 Identify (06h): Supported 00:17:50.168 Abort (08h): Supported 00:17:50.168 Set Features (09h): Supported 00:17:50.168 Get Features (0Ah): Supported 00:17:50.168 Asynchronous Event Request (0Ch): Supported 00:17:50.168 Keep Alive (18h): Supported 00:17:50.168 I/O Commands 00:17:50.168 ------------ 00:17:50.168 Flush (00h): Supported LBA-Change 00:17:50.168 Write (01h): Supported LBA-Change 00:17:50.168 Read (02h): Supported 00:17:50.168 Compare (05h): Supported 00:17:50.168 Write Zeroes (08h): Supported LBA-Change 00:17:50.168 Dataset Management (09h): Supported LBA-Change 00:17:50.168 Copy (19h): Supported LBA-Change 00:17:50.168 Unknown (79h): Supported LBA-Change 00:17:50.168 Unknown (7Ah): Supported 00:17:50.168 00:17:50.168 Error Log 00:17:50.168 ========= 00:17:50.168 00:17:50.168 Arbitration 00:17:50.168 =========== 00:17:50.168 Arbitration Burst: 1 00:17:50.168 00:17:50.168 Power Management 00:17:50.168 ================ 00:17:50.168 Number of Power States: 1 00:17:50.168 Current Power State: Power State #0 00:17:50.168 Power State #0: 00:17:50.168 Max Power: 0.00 W 00:17:50.168 Non-Operational State: Operational 00:17:50.168 Entry Latency: Not Reported 00:17:50.168 Exit Latency: Not Reported 00:17:50.168 Relative Read Throughput: 0 00:17:50.168 Relative Read Latency: 0 00:17:50.168 Relative Write Throughput: 
0 00:17:50.168 Relative Write Latency: 0 00:17:50.168 Idle Power: Not Reported 00:17:50.168 Active Power: Not Reported 00:17:50.168 Non-Operational Permissive Mode: Not Supported 00:17:50.168 00:17:50.168 Health Information 00:17:50.168 ================== 00:17:50.168 Critical Warnings: 00:17:50.168 Available Spare Space: OK 00:17:50.168 Temperature: OK 00:17:50.168 Device Reliability: OK 00:17:50.168 Read Only: No 00:17:50.168 Volatile Memory Backup: OK 00:17:50.168 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:50.168 Temperature Threshold: [2024-05-15 02:34:37.489847] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.489861] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x84cc80) 00:17:50.168 [2024-05-15 02:34:37.489873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.168 [2024-05-15 02:34:37.489897] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac7e0, cid 7, qid 0 00:17:50.168 [2024-05-15 02:34:37.490087] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.168 [2024-05-15 02:34:37.490103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.168 [2024-05-15 02:34:37.490110] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.490117] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac7e0) on tqpair=0x84cc80 00:17:50.168 [2024-05-15 02:34:37.490159] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:50.168 [2024-05-15 02:34:37.490181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.168 [2024-05-15 02:34:37.490193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.168 [2024-05-15 02:34:37.490203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.168 [2024-05-15 02:34:37.490212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.168 [2024-05-15 02:34:37.490231] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.490240] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.490247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84cc80) 00:17:50.168 [2024-05-15 02:34:37.490257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.168 [2024-05-15 02:34:37.490280] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac260, cid 3, qid 0 00:17:50.168 [2024-05-15 02:34:37.490434] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.168 [2024-05-15 02:34:37.490449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.168 [2024-05-15 02:34:37.490456] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.490463] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac260) on tqpair=0x84cc80 00:17:50.168 [2024-05-15 
02:34:37.490474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.490482] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.490488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84cc80) 00:17:50.168 [2024-05-15 02:34:37.490499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.168 [2024-05-15 02:34:37.490524] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac260, cid 3, qid 0 00:17:50.168 [2024-05-15 02:34:37.490704] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.168 [2024-05-15 02:34:37.490719] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.168 [2024-05-15 02:34:37.490727] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.490734] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac260) on tqpair=0x84cc80 00:17:50.168 [2024-05-15 02:34:37.490742] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:50.168 [2024-05-15 02:34:37.490750] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:50.168 [2024-05-15 02:34:37.490766] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.490776] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.490783] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84cc80) 00:17:50.168 [2024-05-15 02:34:37.490793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.168 [2024-05-15 02:34:37.490814] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac260, cid 3, qid 0 00:17:50.168 [2024-05-15 02:34:37.494945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.168 [2024-05-15 02:34:37.494963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.168 [2024-05-15 02:34:37.494970] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.494976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac260) on tqpair=0x84cc80 00:17:50.168 [2024-05-15 02:34:37.494994] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.495004] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.495010] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84cc80) 00:17:50.168 [2024-05-15 02:34:37.495021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.168 [2024-05-15 02:34:37.495043] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ac260, cid 3, qid 0 00:17:50.168 [2024-05-15 02:34:37.495206] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.168 [2024-05-15 02:34:37.495222] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.168 [2024-05-15 02:34:37.495230] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.168 [2024-05-15 02:34:37.495237] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8ac260) on tqpair=0x84cc80 00:17:50.168 [2024-05-15 02:34:37.495250] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:17:50.168 0 Kelvin (-273 Celsius) 00:17:50.168 Available Spare: 0% 00:17:50.168 Available Spare Threshold: 0% 00:17:50.168 Life Percentage Used: 0% 00:17:50.168 Data Units Read: 0 00:17:50.168 Data Units Written: 0 00:17:50.168 Host Read Commands: 0 00:17:50.168 Host Write Commands: 0 00:17:50.168 Controller Busy Time: 0 minutes 00:17:50.168 Power Cycles: 0 00:17:50.168 Power On Hours: 0 hours 00:17:50.169 Unsafe Shutdowns: 0 00:17:50.169 Unrecoverable Media Errors: 0 00:17:50.169 Lifetime Error Log Entries: 0 00:17:50.169 Warning Temperature Time: 0 minutes 00:17:50.169 Critical Temperature Time: 0 minutes 00:17:50.169 00:17:50.169 Number of Queues 00:17:50.169 ================ 00:17:50.169 Number of I/O Submission Queues: 127 00:17:50.169 Number of I/O Completion Queues: 127 00:17:50.169 00:17:50.169 Active Namespaces 00:17:50.169 ================= 00:17:50.169 Namespace ID:1 00:17:50.169 Error Recovery Timeout: Unlimited 00:17:50.169 Command Set Identifier: NVM (00h) 00:17:50.169 Deallocate: Supported 00:17:50.169 Deallocated/Unwritten Error: Not Supported 00:17:50.169 Deallocated Read Value: Unknown 00:17:50.169 Deallocate in Write Zeroes: Not Supported 00:17:50.169 Deallocated Guard Field: 0xFFFF 00:17:50.169 Flush: Supported 00:17:50.169 Reservation: Supported 00:17:50.169 Namespace Sharing Capabilities: Multiple Controllers 00:17:50.169 Size (in LBAs): 131072 (0GiB) 00:17:50.169 Capacity (in LBAs): 131072 (0GiB) 00:17:50.169 Utilization (in LBAs): 131072 (0GiB) 00:17:50.169 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:50.169 EUI64: ABCDEF0123456789 00:17:50.169 UUID: 14b94394-fcc4-4f00-85eb-23021ece7873 00:17:50.169 Thin Provisioning: Not Supported 00:17:50.169 Per-NS Atomic Units: Yes 00:17:50.169 Atomic Boundary Size (Normal): 0 00:17:50.169 Atomic Boundary Size (PFail): 0 00:17:50.169 Atomic Boundary Offset: 0 00:17:50.169 Maximum Single Source Range Length: 65535 00:17:50.169 Maximum Copy Length: 65535 00:17:50.169 Maximum Source Range Count: 1 00:17:50.169 NGUID/EUI64 Never Reused: No 00:17:50.169 Namespace Write Protected: No 00:17:50.169 Number of LBA Formats: 1 00:17:50.169 Current LBA Format: LBA Format #00 00:17:50.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:50.169 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.169 rmmod nvme_tcp 00:17:50.169 rmmod nvme_fabrics 00:17:50.169 rmmod nvme_keyring 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2347590 ']' 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2347590 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 2347590 ']' 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 2347590 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:50.169 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2347590 00:17:50.426 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:50.426 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:50.426 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2347590' 00:17:50.426 killing process with pid 2347590 00:17:50.426 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 2347590 00:17:50.426 [2024-05-15 02:34:37.588597] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:50.426 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 2347590 00:17:50.685 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:50.685 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:50.685 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:50.685 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.685 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.685 02:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.685 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.685 02:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.585 02:34:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.585 00:17:52.585 real 0m6.497s 00:17:52.585 user 0m7.421s 00:17:52.585 sys 0m2.151s 00:17:52.585 02:34:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:52.585 02:34:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.585 ************************************ 00:17:52.585 END TEST nvmf_identify 00:17:52.585 ************************************ 00:17:52.585 02:34:39 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:52.585 02:34:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
00:17:52.585 02:34:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:52.585 02:34:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.585 ************************************ 00:17:52.585 START TEST nvmf_perf 00:17:52.585 ************************************ 00:17:52.585 02:34:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:52.843 * Looking for test storage... 00:17:52.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:52.843 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.844 
02:34:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:17:52.844 02:34:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:55.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:55.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:55.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:55.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.375 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:55.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:17:55.376 00:17:55.376 --- 10.0.0.2 ping statistics --- 00:17:55.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.376 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:55.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:17:55.376 00:17:55.376 --- 10.0.0.1 ping statistics --- 00:17:55.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.376 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2350093 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2350093 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 2350093 ']' 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:55.376 02:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:55.376 [2024-05-15 02:34:42.742892] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:17:55.376 [2024-05-15 02:34:42.742987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.376 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.633 [2024-05-15 02:34:42.820342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:55.633 [2024-05-15 02:34:42.926890] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.633 [2024-05-15 02:34:42.926971] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
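The nvmf_tcp_init trace above splits the E810 port pair so that the target runs behind a private network namespace (cvl_0_0 inside cvl_0_0_ns_spdk, 10.0.0.2) while the initiator keeps the sibling port in the default namespace (cvl_0_1, 10.0.0.1), giving a real NIC-to-NIC TCP path. A minimal stand-alone sketch of that setup, using only the commands and names visible in the log (run as root; any other connected interface pair would work the same way):

    #!/usr/bin/env bash
    # Recreate the target/initiator split performed by nvmf_tcp_init above.
    set -e
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                            # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (default namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside namespace)

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic (port 4420) in through the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check connectivity in both directions, as the test does.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

The target application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...), which is why NVMF_APP is prefixed with the netns command in the trace that follows.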
00:17:55.633 [2024-05-15 02:34:42.926985] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.633 [2024-05-15 02:34:42.927010] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.633 [2024-05-15 02:34:42.927020] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.633 [2024-05-15 02:34:42.927074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.633 [2024-05-15 02:34:42.927136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.633 [2024-05-15 02:34:42.927202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.633 [2024-05-15 02:34:42.927205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.891 02:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:55.891 02:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:17:55.891 02:34:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:55.891 02:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.891 02:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:55.891 02:34:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.891 02:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:55.891 02:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:59.167 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:59.167 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:59.167 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:17:59.167 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:59.425 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:59.425 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:17:59.425 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:59.425 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:59.425 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:59.683 [2024-05-15 02:34:46.932125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.683 02:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:59.940 02:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:59.940 02:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.197 02:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:00.197 02:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:00.455 02:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.712 [2024-05-15 02:34:47.907502] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:00.712 [2024-05-15 02:34:47.907810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.712 02:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:00.970 02:34:48 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:18:00.970 02:34:48 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:18:00.970 02:34:48 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:00.970 02:34:48 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:18:02.343 Initializing NVMe Controllers 00:18:02.343 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:18:02.343 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:18:02.343 Initialization complete. Launching workers. 00:18:02.343 ======================================================== 00:18:02.343 Latency(us) 00:18:02.343 Device Information : IOPS MiB/s Average min max 00:18:02.343 PCIE (0000:88:00.0) NSID 1 from core 0: 84082.30 328.45 380.04 27.03 7272.89 00:18:02.343 ======================================================== 00:18:02.343 Total : 84082.30 328.45 380.04 27.03 7272.89 00:18:02.343 00:18:02.343 02:34:49 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:02.343 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.716 Initializing NVMe Controllers 00:18:03.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:03.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:03.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:03.716 Initialization complete. Launching workers. 
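The host/perf.sh steps traced above build the whole export over JSON-RPC: a malloc bdev and the local NVMe bdev become namespaces of nqn.2016-06.io.spdk:cnode1, and a TCP listener is opened on 10.0.0.2:4420. A condensed sketch of that sequence (rpc.py is assumed to talk to the nvmf_tgt started earlier; the bdev_nvme_attach_controller line is an illustrative stand-in for what gen_nvme.sh + load_subsystem_config do in the log):

    RPC=./scripts/rpc.py    # targets /var/tmp/spdk.sock of the running nvmf_tgt

    $RPC bdev_malloc_create 64 512                        # -> Malloc0 (64 MiB, 512 B blocks)
    $RPC bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:88:00.0   # -> Nvme0n1 (illustrative)

    $RPC nvmf_create_transport -t tcp -o                  # TCP transport init
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420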
00:18:03.716 ======================================================== 00:18:03.716 Latency(us) 00:18:03.716 Device Information : IOPS MiB/s Average min max 00:18:03.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.00 0.30 13294.71 284.20 45736.80 00:18:03.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15823.76 7949.53 47901.94 00:18:03.716 ======================================================== 00:18:03.716 Total : 143.00 0.56 14461.97 284.20 47901.94 00:18:03.716 00:18:03.716 02:34:50 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:03.716 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.694 Initializing NVMe Controllers 00:18:04.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:04.694 Initialization complete. Launching workers. 00:18:04.694 ======================================================== 00:18:04.694 Latency(us) 00:18:04.694 Device Information : IOPS MiB/s Average min max 00:18:04.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7750.68 30.28 4137.28 969.62 11476.17 00:18:04.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2786.53 10.88 11519.57 5682.04 27118.85 00:18:04.694 ======================================================== 00:18:04.694 Total : 10537.21 41.16 6089.50 969.62 27118.85 00:18:04.694 00:18:04.951 02:34:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:18:04.951 02:34:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:18:04.951 02:34:52 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:04.951 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.479 Initializing NVMe Controllers 00:18:07.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.479 Controller IO queue size 128, less than required. 00:18:07.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:07.479 Controller IO queue size 128, less than required. 00:18:07.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:07.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:07.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:07.479 Initialization complete. Launching workers. 
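Each run above is the same spdk_nvme_perf binary pointed either at the local PCIe controller or at the TCP listener; only queue depth (-q), IO size (-o) and duration (-t) change between runs. A representative invocation matching the -q 32 case in the log:

    ./build/bin/spdk_nvme_perf \
        -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # -q  queue depth per namespace
    # -o  IO size in bytes
    # -w  workload pattern; with -M 50 the random mix is 50% reads / 50% writes
    # -r  transport ID of the controller under test
    #     (the local baseline run instead uses -r 'trtype:PCIe traddr:0000:88:00.0')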
00:18:07.479 ======================================================== 00:18:07.479 Latency(us) 00:18:07.479 Device Information : IOPS MiB/s Average min max 00:18:07.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 819.50 204.87 161273.50 77269.81 247004.78 00:18:07.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 607.00 151.75 218441.86 78111.86 339057.92 00:18:07.479 ======================================================== 00:18:07.479 Total : 1426.50 356.62 185599.61 77269.81 339057.92 00:18:07.479 00:18:07.479 02:34:54 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:18:07.479 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.479 No valid NVMe controllers or AIO or URING devices found 00:18:07.479 Initializing NVMe Controllers 00:18:07.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.479 Controller IO queue size 128, less than required. 00:18:07.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:07.479 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:07.479 Controller IO queue size 128, less than required. 00:18:07.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:07.479 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:18:07.479 WARNING: Some requested NVMe devices were skipped 00:18:07.740 02:34:54 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:18:07.740 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.265 Initializing NVMe Controllers 00:18:10.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:10.265 Controller IO queue size 128, less than required. 00:18:10.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:10.265 Controller IO queue size 128, less than required. 00:18:10.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:10.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:10.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:10.265 Initialization complete. Launching workers. 
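The "No valid NVMe controllers or AIO or URING devices found" run above is expected rather than a failure: with -o 36964 the requested IO size is not a multiple of either namespace's 512-byte sector size, so both namespaces are removed from the test and nothing remains to exercise. The check is plain modular arithmetic:

    echo $(( 36964 % 512 ))   # -> 100, so 36964 is not sector-aligned and the namespace is skipped
    echo $(( 36864 % 512 ))   # -> 0,   a nearby size that would have been accepted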
00:18:10.265 00:18:10.265 ==================== 00:18:10.265 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:10.265 TCP transport: 00:18:10.265 polls: 41771 00:18:10.265 idle_polls: 12959 00:18:10.265 sock_completions: 28812 00:18:10.265 nvme_completions: 3387 00:18:10.265 submitted_requests: 5104 00:18:10.265 queued_requests: 1 00:18:10.265 00:18:10.265 ==================== 00:18:10.265 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:10.265 TCP transport: 00:18:10.265 polls: 45055 00:18:10.265 idle_polls: 12805 00:18:10.265 sock_completions: 32250 00:18:10.265 nvme_completions: 2891 00:18:10.265 submitted_requests: 4268 00:18:10.265 queued_requests: 1 00:18:10.265 ======================================================== 00:18:10.265 Latency(us) 00:18:10.265 Device Information : IOPS MiB/s Average min max 00:18:10.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 846.43 211.61 157628.69 79933.64 249240.64 00:18:10.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 722.44 180.61 184334.37 79325.25 253597.41 00:18:10.265 ======================================================== 00:18:10.265 Total : 1568.86 392.22 169926.24 79325.25 253597.41 00:18:10.265 00:18:10.265 02:34:57 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:10.265 02:34:57 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:10.523 rmmod nvme_tcp 00:18:10.523 rmmod nvme_fabrics 00:18:10.523 rmmod nvme_keyring 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2350093 ']' 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2350093 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 2350093 ']' 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 2350093 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2350093 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:10.523 02:34:57 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2350093' 00:18:10.523 killing process with pid 2350093 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 2350093 00:18:10.523 [2024-05-15 02:34:57.796257] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:10.523 02:34:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 2350093 00:18:12.420 02:34:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:12.420 02:34:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:12.420 02:34:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:12.420 02:34:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.420 02:34:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.420 02:34:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.420 02:34:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.420 02:34:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.321 02:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:14.321 00:18:14.321 real 0m21.524s 00:18:14.321 user 1m4.602s 00:18:14.321 sys 0m5.422s 00:18:14.321 02:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:14.321 02:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:14.321 ************************************ 00:18:14.321 END TEST nvmf_perf 00:18:14.321 ************************************ 00:18:14.321 02:35:01 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:14.321 02:35:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:14.321 02:35:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:14.321 02:35:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.321 ************************************ 00:18:14.321 START TEST nvmf_fio_host 00:18:14.321 ************************************ 00:18:14.321 02:35:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:14.321 * Looking for test storage... 
00:18:14.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:18:14.322 02:35:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
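nvmf/common.sh above derives the host identity from nvme-cli: nvme gen-hostnqn produces the UUID-based NQN seen in the trace, and the bare UUID becomes the host ID. This particular test uses the SPDK initiator rather than the kernel one, but the same values would feed a kernel-side connection roughly as sketched below (illustrative only; no nvme connect is actually run in this log):

    HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}          # strip the prefix, keeping only the UUID

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"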
00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:16.851 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:16.851 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.851 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:16.852 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:16.852 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:16.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:18:16.852 00:18:16.852 --- 10.0.0.2 ping statistics --- 00:18:16.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.852 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:18:16.852 00:18:16.852 --- 10.0.0.1 ping statistics --- 00:18:16.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.852 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=2354346 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 2354346 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 2354346 ']' 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:16.852 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.852 [2024-05-15 02:35:04.201811] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:18:16.852 [2024-05-15 02:35:04.201894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.852 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.110 [2024-05-15 02:35:04.276755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.110 [2024-05-15 02:35:04.383414] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:17.110 [2024-05-15 02:35:04.383477] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.110 [2024-05-15 02:35:04.383490] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.110 [2024-05-15 02:35:04.383501] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.110 [2024-05-15 02:35:04.383524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.110 [2024-05-15 02:35:04.383617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.110 [2024-05-15 02:35:04.383683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.110 [2024-05-15 02:35:04.383750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.110 [2024-05-15 02:35:04.383752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.110 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:17.110 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:18:17.110 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.110 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.110 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.110 [2024-05-15 02:35:04.519728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.368 Malloc1 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:18:17.368 [2024-05-15 02:35:04.601263] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:17.368 [2024-05-15 02:35:04.601579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:17.368 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:17.369 
02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:17.369 02:35:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:17.626 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:17.626 fio-3.35 00:18:17.626 Starting 1 thread 00:18:17.626 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.154 00:18:20.154 test: (groupid=0, jobs=1): err= 0: pid=2354562: Wed May 15 02:35:07 2024 00:18:20.154 read: IOPS=9089, BW=35.5MiB/s (37.2MB/s)(71.2MiB/2006msec) 00:18:20.154 slat (nsec): min=1862, max=178226, avg=2643.45, stdev=2081.71 00:18:20.154 clat (usec): min=3453, max=13847, avg=7795.07, stdev=581.09 00:18:20.154 lat (usec): min=3479, max=13849, avg=7797.72, stdev=580.96 00:18:20.154 clat percentiles (usec): 00:18:20.154 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:18:20.154 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7767], 60.00th=[ 7963], 00:18:20.154 | 70.00th=[ 8094], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:18:20.154 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11994], 99.95th=[13173], 00:18:20.154 | 99.99th=[13829] 00:18:20.154 bw ( KiB/s): min=35560, max=36880, per=99.90%, avg=36322.00, stdev=560.24, samples=4 00:18:20.154 iops : min= 8890, max= 9220, avg=9080.50, stdev=140.06, samples=4 00:18:20.154 write: IOPS=9101, BW=35.6MiB/s (37.3MB/s)(71.3MiB/2006msec); 0 zone resets 00:18:20.154 slat (usec): min=2, max=135, avg= 2.81, stdev= 1.40 00:18:20.154 clat (usec): min=1537, max=11239, avg=6244.08, stdev=499.92 00:18:20.154 lat (usec): min=1546, max=11241, avg=6246.89, stdev=499.86 00:18:20.154 clat percentiles (usec): 00:18:20.154 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:18:20.154 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:18:20.154 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:18:20.154 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9241], 99.95th=[ 9765], 00:18:20.154 | 99.99th=[11207] 00:18:20.154 bw ( KiB/s): min=36224, max=36608, per=100.00%, avg=36406.00, stdev=158.04, samples=4 00:18:20.154 iops : min= 9056, max= 9152, avg=9101.50, stdev=39.51, samples=4 00:18:20.154 lat (msec) : 2=0.01%, 4=0.10%, 10=99.81%, 20=0.09% 00:18:20.154 cpu : usr=54.36%, sys=35.86%, ctx=75, majf=0, minf=5 00:18:20.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:20.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.154 issued rwts: total=18234,18257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.154 00:18:20.154 Run status group 0 (all jobs): 00:18:20.154 READ: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.2MiB (74.7MB), run=2006-2006msec 00:18:20.154 WRITE: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.3MiB (74.8MB), run=2006-2006msec 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:20.154 02:35:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:20.154 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:20.154 fio-3.35 00:18:20.154 Starting 1 thread 00:18:20.154 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.678 00:18:22.678 test: (groupid=0, jobs=1): err= 0: pid=2354895: Wed May 15 02:35:09 2024 00:18:22.678 read: IOPS=7405, BW=116MiB/s (121MB/s)(232MiB/2008msec) 00:18:22.678 slat (nsec): min=2911, max=96279, avg=3688.08, stdev=1741.37 00:18:22.678 clat (usec): min=2378, max=22098, avg=10374.74, stdev=2541.42 00:18:22.678 lat (usec): min=2382, max=22101, avg=10378.43, 
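The fio_nvme/fio_plugin helper traced above boils down to three steps: check whether the spdk_nvme plugin was linked against a sanitizer runtime (the ldd | grep libasan / libclang_rt.asan | awk dance), preload that runtime if found, and then run stock fio with LD_PRELOAD pointing at the plugin and an SPDK transport ID passed through --filename. Reduced to its essentials, with the paths used in this workspace (a sketch, not the exact helper):

    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    JOB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio

    # If the plugin links a sanitizer runtime, that library must be preloaded before fio itself.
    asan_lib=$(ldd "$PLUGIN" | grep -E 'libasan|libclang_rt.asan' | awk '{print $3}')

    LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio "$JOB" \
        --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096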
stdev=2541.51 00:18:22.678 clat percentiles (usec): 00:18:22.678 | 1.00th=[ 5080], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 8225], 00:18:22.678 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[10945], 00:18:22.678 | 70.00th=[11600], 80.00th=[12387], 90.00th=[13698], 95.00th=[14746], 00:18:22.678 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18482], 99.95th=[18744], 00:18:22.678 | 99.99th=[21365] 00:18:22.678 bw ( KiB/s): min=51904, max=70112, per=50.93%, avg=60344.00, stdev=8653.44, samples=4 00:18:22.678 iops : min= 3244, max= 4382, avg=3771.50, stdev=540.84, samples=4 00:18:22.678 write: IOPS=4357, BW=68.1MiB/s (71.4MB/s)(123MiB/1813msec); 0 zone resets 00:18:22.678 slat (usec): min=30, max=149, avg=33.35, stdev= 4.91 00:18:22.678 clat (usec): min=6042, max=20984, avg=12128.10, stdev=2195.04 00:18:22.678 lat (usec): min=6075, max=21017, avg=12161.45, stdev=2195.42 00:18:22.678 clat percentiles (usec): 00:18:22.678 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:18:22.678 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:18:22.678 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15008], 95.00th=[15926], 00:18:22.678 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19268], 99.95th=[19792], 00:18:22.678 | 99.99th=[21103] 00:18:22.678 bw ( KiB/s): min=54720, max=72000, per=90.11%, avg=62832.00, stdev=8617.79, samples=4 00:18:22.678 iops : min= 3420, max= 4500, avg=3927.00, stdev=538.61, samples=4 00:18:22.678 lat (msec) : 4=0.14%, 10=34.58%, 20=65.26%, 50=0.02% 00:18:22.678 cpu : usr=73.36%, sys=21.86%, ctx=29, majf=0, minf=1 00:18:22.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:22.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.678 issued rwts: total=14870,7901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.678 00:18:22.678 Run status group 0 (all jobs): 00:18:22.678 READ: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=232MiB (244MB), run=2008-2008msec 00:18:22.678 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=123MiB (129MB), run=1813-1813msec 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:22.678 rmmod nvme_tcp 00:18:22.678 rmmod nvme_fabrics 00:18:22.678 rmmod nvme_keyring 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2354346 ']' 00:18:22.678 02:35:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2354346 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 2354346 ']' 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 2354346 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2354346 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2354346' 00:18:22.679 killing process with pid 2354346 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 2354346 00:18:22.679 [2024-05-15 02:35:09.885151] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:22.679 02:35:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 2354346 00:18:22.937 02:35:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:22.937 02:35:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:22.937 02:35:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:22.937 02:35:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.937 02:35:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:22.938 02:35:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.938 02:35:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.938 02:35:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.838 02:35:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:24.838 00:18:24.838 real 0m10.691s 00:18:24.838 user 0m26.772s 00:18:24.838 sys 0m4.080s 00:18:24.838 02:35:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:24.838 02:35:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 ************************************ 00:18:24.838 END TEST nvmf_fio_host 00:18:24.838 ************************************ 00:18:25.096 02:35:12 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:25.096 02:35:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:25.096 02:35:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:18:25.096 02:35:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.096 ************************************ 00:18:25.096 START TEST nvmf_failover 00:18:25.096 ************************************ 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:25.096 * Looking for test storage... 00:18:25.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.096 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:18:25.097 02:35:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:27.658 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.658 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.658 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.658 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.658 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.658 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.658 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.658 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.658 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:27.659 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:27.659 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:27.659 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:27.659 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:27.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:27.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:18:27.659 00:18:27.659 --- 10.0.0.2 ping statistics --- 00:18:27.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.659 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:18:27.659 00:18:27.659 --- 10.0.0.1 ping statistics --- 00:18:27.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.659 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2357376 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2357376 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2357376 ']' 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:27.659 02:35:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:27.659 [2024-05-15 02:35:14.864140] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
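For readers following the trace, the nvmftestinit phase above reduces to the short network bring-up sketched below. This is a condensed restatement of the commands visible in the trace, not an excerpt of nvmf/common.sh; the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the abbreviated nvmf_tgt path come from this particular E810 host and would differ on another machine.

    # move one port of the NIC into a private namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the default NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target application is then launched inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE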
00:18:27.659 [2024-05-15 02:35:14.864236] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.659 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.659 [2024-05-15 02:35:14.942862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:27.659 [2024-05-15 02:35:15.052585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.659 [2024-05-15 02:35:15.052640] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.659 [2024-05-15 02:35:15.052668] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.659 [2024-05-15 02:35:15.052679] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.659 [2024-05-15 02:35:15.052689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.660 [2024-05-15 02:35:15.052775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.660 [2024-05-15 02:35:15.052839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.660 [2024-05-15 02:35:15.052842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.918 02:35:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:27.918 02:35:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:27.918 02:35:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.918 02:35:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.918 02:35:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:27.918 02:35:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.918 02:35:15 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:28.176 [2024-05-15 02:35:15.425885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.176 02:35:15 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:28.434 Malloc0 00:18:28.434 02:35:15 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:28.691 02:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.947 02:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.205 [2024-05-15 02:35:16.539034] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:29.205 [2024-05-15 02:35:16.539355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.205 02:35:16 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:29.461 [2024-05-15 02:35:16.783947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:29.461 02:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:29.719 [2024-05-15 02:35:17.076867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2357670 00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2357670 /var/tmp/bdevperf.sock 00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2357670 ']' 00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
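Before the attach/remove-listener churn that follows, failover.sh provisions a single subsystem with three TCP listeners and starts bdevperf idle on its own RPC socket. A condensed sketch of the rpc.py calls traced above (rpc.py stands for scripts/rpc.py and the bdevperf path is abbreviated; the arguments themselves are taken from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # bdevperf starts idle (-z) on a dedicated RPC socket; controllers are attached later
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f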
00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:29.719 02:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:30.286 02:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.286 02:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:30.286 02:35:17 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:30.543 NVMe0n1 00:18:30.543 02:35:17 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:30.801 00:18:30.801 02:35:18 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2357806 00:18:30.801 02:35:18 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.801 02:35:18 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:31.734 02:35:19 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.993 [2024-05-15 02:35:19.364652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.364864] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [... identical tcp.c:1598 recv-state messages for tqpair=0x153ebf0 repeated at successive timestamps; duplicate entries omitted ...] 00:18:31.993 [2024-05-15
02:35:19.365679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.365695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.365709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.365724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.365736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.365747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.365759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.993 [2024-05-15 02:35:19.365770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.994 [2024-05-15 02:35:19.365782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.994 [2024-05-15 02:35:19.365799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ebf0 is same with the state(5) to be set 00:18:31.994 02:35:19 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:35.275 02:35:22 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:35.533 00:18:35.533 02:35:22 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:35.798 [2024-05-15 02:35:23.011458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153f420 is same with the state(5) to be set 00:18:35.798 [2024-05-15 02:35:23.011506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153f420 is same with the state(5) to be set 00:18:35.798 [2024-05-15 02:35:23.011526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153f420 is same with the state(5) to be set 00:18:35.798 [2024-05-15 02:35:23.011538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153f420 is same with the state(5) to be set 00:18:35.798 [2024-05-15 02:35:23.011549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153f420 is same with the state(5) to be set 00:18:35.798 [2024-05-15 02:35:23.011561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153f420 is same with the state(5) to be set 00:18:35.798 [2024-05-15 02:35:23.011573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153f420 is same with the state(5) to be set 00:18:35.798 [2024-05-15 02:35:23.011590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153f420 is same with the state(5) to be set 00:18:35.798 02:35:23 nvmf_tcp.nvmf_failover 
-- host/failover.sh@50 -- # sleep 3 00:18:39.119 02:35:26 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.119 [2024-05-15 02:35:26.271304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.119 02:35:26 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:40.054 02:35:27 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:40.312 [2024-05-15 02:35:27.536866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.536923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.536948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.536961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.536978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.536990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.312 [2024-05-15 02:35:27.537112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 [2024-05-15 02:35:27.537123] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 [... identical tcp.c:1598 recv-state messages for tqpair=0x12e4ef0 repeated at successive timestamps; duplicate entries omitted ...]
02:35:27.537863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 [2024-05-15 02:35:27.537874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 [2024-05-15 02:35:27.537885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 [2024-05-15 02:35:27.537899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 [2024-05-15 02:35:27.537911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 [2024-05-15 02:35:27.537922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 [2024-05-15 02:35:27.537956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 [2024-05-15 02:35:27.537969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4ef0 is same with the state(5) to be set 00:18:40.313 02:35:27 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2357806 00:18:46.871 0 00:18:46.871 02:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2357670 00:18:46.871 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2357670 ']' 00:18:46.871 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2357670 00:18:46.871 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:18:46.871 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:46.871 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2357670 00:18:46.871 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:46.872 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:46.872 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2357670' 00:18:46.872 killing process with pid 2357670 00:18:46.872 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2357670 00:18:46.872 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2357670 00:18:46.872 02:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:46.872 [2024-05-15 02:35:17.141857] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:18:46.872 [2024-05-15 02:35:17.141976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2357670 ] 00:18:46.872 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.872 [2024-05-15 02:35:17.219275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.872 [2024-05-15 02:35:17.328845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.872 Running I/O for 15 seconds... 
00:18:46.872 [2024-05-15 02:35:19.366381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366718] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.366977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.366991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:27 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.872 [2024-05-15 02:35:19.367500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.872 [2024-05-15 02:35:19.367518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66000 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:46.873 [2024-05-15 02:35:19.367934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.367982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.367997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368256] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.873 [2024-05-15 02:35:19.368738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.873 [2024-05-15 02:35:19.368753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.368767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.368782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.368796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.368810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.368824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.368840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.368854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.368869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.368883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.368898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.368912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.368926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.368947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.368962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.368976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.368991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.369004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.369037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.874 [2024-05-15 02:35:19.369066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:46.874 [2024-05-15 02:35:19.369166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369462] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369746] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.874 [2024-05-15 02:35:19.369952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.874 [2024-05-15 02:35:19.369967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:19.369981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.369996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:19.370009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:19.370038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66392 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:19.370067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:19.370095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:19.370124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:19.370157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:19.370186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:19.370215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.875 [2024-05-15 02:35:19.370269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.875 [2024-05-15 02:35:19.370283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66440 len:8 PRP1 0x0 PRP2 0x0 00:18:46.875 [2024-05-15 02:35:19.370296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370358] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2325170 was disconnected and freed. reset controller. 
00:18:46.875 [2024-05-15 02:35:19.370384] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:46.875 [2024-05-15 02:35:19.370417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.875 [2024-05-15 02:35:19.370435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.875 [2024-05-15 02:35:19.370464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.875 [2024-05-15 02:35:19.370491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.875 [2024-05-15 02:35:19.370518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:19.370531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.875 [2024-05-15 02:35:19.374002] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.875 [2024-05-15 02:35:19.374041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23062f0 (9): Bad file descriptor 00:18:46.875 [2024-05-15 02:35:19.413076] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:46.875 [2024-05-15 02:35:23.009971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.875 [2024-05-15 02:35:23.010045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.010063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.875 [2024-05-15 02:35:23.010077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.010101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.875 [2024-05-15 02:35:23.010115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.010129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.875 [2024-05-15 02:35:23.010142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.010155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23062f0 is same with the state(5) to be set 00:18:46.875 [2024-05-15 02:35:23.011925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:23.011972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:23.012015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:23.012045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:23.012072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:23.012100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:23.012144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:23.012172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.875 [2024-05-15 02:35:23.012202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.875 [2024-05-15 02:35:23.012543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.875 [2024-05-15 02:35:23.012557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.012981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.012995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:46.876 [2024-05-15 02:35:23.013073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.876 [2024-05-15 02:35:23.013202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.876 [2024-05-15 02:35:23.013235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.876 [2024-05-15 02:35:23.013265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.876 [2024-05-15 02:35:23.013300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.876 [2024-05-15 02:35:23.013329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.876 [2024-05-15 02:35:23.013358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013374] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.876 [2024-05-15 02:35:23.013388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.876 [2024-05-15 02:35:23.013418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.876 [2024-05-15 02:35:23.013538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.876 [2024-05-15 02:35:23.013553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.877 [2024-05-15 02:35:23.013567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.877 [2024-05-15 02:35:23.013596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.877 [2024-05-15 02:35:23.013626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013669] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.013983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72936 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.013996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 
[2024-05-15 02:35:23.014300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.877 [2024-05-15 02:35:23.014768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.877 [2024-05-15 02:35:23.014783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.014797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.014812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.014827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.014842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.014856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.014871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.014885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.014900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.014914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.014936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.014956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.014972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.014986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.015015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.015045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.878 [2024-05-15 02:35:23.015073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73232 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73240 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015222] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73248 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73256 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73264 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73272 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73280 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73288 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73296 len:8 PRP1 
0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72592 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72600 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72608 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72616 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72624 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72632 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72640 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.015941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72648 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.015956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.015980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.015991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.016002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72656 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.016015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.878 [2024-05-15 02:35:23.016029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.878 [2024-05-15 02:35:23.016039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.878 [2024-05-15 02:35:23.016051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72664 len:8 PRP1 0x0 PRP2 0x0 00:18:46.878 [2024-05-15 02:35:23.016063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:23.016076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.879 [2024-05-15 02:35:23.016087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.879 [2024-05-15 02:35:23.016099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72672 len:8 PRP1 0x0 PRP2 0x0 00:18:46.879 [2024-05-15 02:35:23.016117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:23.016131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.879 [2024-05-15 02:35:23.016142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.879 [2024-05-15 02:35:23.016153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72680 len:8 PRP1 0x0 PRP2 0x0 00:18:46.879 [2024-05-15 02:35:23.016169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:23.016183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.879 [2024-05-15 02:35:23.016194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.879 [2024-05-15 02:35:23.016205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72688 len:8 PRP1 0x0 PRP2 0x0 00:18:46.879 [2024-05-15 02:35:23.016218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:23.016237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.879 [2024-05-15 02:35:23.016249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.879 [2024-05-15 02:35:23.016260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72696 len:8 PRP1 0x0 PRP2 0x0 00:18:46.879 [2024-05-15 02:35:23.016273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:23.016286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.879 [2024-05-15 02:35:23.016297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.879 [2024-05-15 02:35:23.016309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72704 len:8 PRP1 0x0 PRP2 0x0 00:18:46.879 [2024-05-15 02:35:23.016321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:23.016334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.879 [2024-05-15 02:35:23.016345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.879 [2024-05-15 02:35:23.016356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72712 len:8 PRP1 0x0 PRP2 0x0 00:18:46.879 [2024-05-15 02:35:23.016369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:23.016434] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2327060 was disconnected and freed. reset controller. 00:18:46.879 [2024-05-15 02:35:23.016453] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:46.879 [2024-05-15 02:35:23.016468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.879 [2024-05-15 02:35:23.019798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.879 [2024-05-15 02:35:23.019839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23062f0 (9): Bad file descriptor 00:18:46.879 [2024-05-15 02:35:23.061208] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:46.879 [2024-05-15 02:35:27.536133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.879 [2024-05-15 02:35:27.536194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.536212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.879 [2024-05-15 02:35:27.536227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.536241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.879 [2024-05-15 02:35:27.536255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.536277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.879 [2024-05-15 02:35:27.536292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.536305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23062f0 is same with the state(5) to be set 00:18:46.879 [2024-05-15 02:35:27.538711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.538736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.538761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.538776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.538792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.538807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.538821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.538835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.538849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.538863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.538878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.538892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.538907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.538920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.538961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.538978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.538994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539216] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.879 [2024-05-15 02:35:27.539444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.879 [2024-05-15 02:35:27.539461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.880 [2024-05-15 02:35:27.539854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.539883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.539925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.539963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.539978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.539991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:130536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 
[2024-05-15 02:35:27.540121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.880 [2024-05-15 02:35:27.540312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.880 [2024-05-15 02:35:27.540327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.881 [2024-05-15 02:35:27.540724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.881 [2024-05-15 02:35:27.540753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.881 [2024-05-15 02:35:27.540782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.881 [2024-05-15 02:35:27.540811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.881 [2024-05-15 02:35:27.540840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.881 [2024-05-15 02:35:27.540869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.881 [2024-05-15 02:35:27.540897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.881 [2024-05-15 02:35:27.540926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.540980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.540994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 
nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:130808 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.881 [2024-05-15 02:35:27.541525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.881 [2024-05-15 02:35:27.541540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.882 [2024-05-15 02:35:27.541554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.882 [2024-05-15 02:35:27.541582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.882 
[2024-05-15 02:35:27.541617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.882 [2024-05-15 02:35:27.541645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.882 [2024-05-15 02:35:27.541674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.882 [2024-05-15 02:35:27.541706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.882 [2024-05-15 02:35:27.541735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.882 [2024-05-15 02:35:27.541763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.882 [2024-05-15 02:35:27.541792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.541842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130944 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.541855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.541886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.541897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130952 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.541910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.541943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:18:46.882 [2024-05-15 02:35:27.541955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130960 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.541968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.541981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.541992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130968 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130976 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130984 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130992 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131000 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 
02:35:27.542247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131008 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131016 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131024 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131032 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131040 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131048 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131056 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131064 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16 len:8 PRP1 0x0 PRP2 0x0 00:18:46.882 [2024-05-15 02:35:27.542736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.882 [2024-05-15 02:35:27.542749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.882 [2024-05-15 02:35:27.542760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.882 [2024-05-15 02:35:27.542771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24 len:8 PRP1 0x0 PRP2 0x0 00:18:46.883 [2024-05-15 02:35:27.542783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.883 [2024-05-15 02:35:27.542796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.883 [2024-05-15 02:35:27.542807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.883 [2024-05-15 02:35:27.542819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:32 len:8 PRP1 0x0 PRP2 0x0 00:18:46.883 [2024-05-15 02:35:27.542831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.883 [2024-05-15 02:35:27.542848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.883 [2024-05-15 02:35:27.542859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.883 [2024-05-15 02:35:27.542870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40 len:8 PRP1 0x0 PRP2 0x0 00:18:46.883 [2024-05-15 02:35:27.542883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.883 [2024-05-15 02:35:27.542895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.883 [2024-05-15 02:35:27.542906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.883 [2024-05-15 02:35:27.542917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48 len:8 PRP1 0x0 PRP2 0x0 00:18:46.883 [2024-05-15 02:35:27.542937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.883 [2024-05-15 02:35:27.542951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.883 [2024-05-15 02:35:27.542963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.883 [2024-05-15 02:35:27.542974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56 len:8 PRP1 0x0 PRP2 0x0 00:18:46.883 [2024-05-15 02:35:27.542987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.883 [2024-05-15 02:35:27.543000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.883 [2024-05-15 02:35:27.543011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.883 [2024-05-15 02:35:27.543022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130480 len:8 PRP1 0x0 PRP2 0x0 00:18:46.883 [2024-05-15 02:35:27.543034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.883 [2024-05-15 02:35:27.543093] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2329d10 was disconnected and freed. reset controller. 00:18:46.883 [2024-05-15 02:35:27.543111] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:46.883 [2024-05-15 02:35:27.543124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.883 [2024-05-15 02:35:27.546414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.883 [2024-05-15 02:35:27.546453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23062f0 (9): Bad file descriptor 00:18:46.883 [2024-05-15 02:35:27.577286] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
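The block of notices above is the NVMe driver draining its queue when the submission queue is deleted during the controller reset: each queued READ/WRITE command is printed and completed manually with ABORTED - SQ DELETION as part of the failover from 10.0.0.2:4422 to 10.0.0.2:4420. When only a summary of such a dump is needed, a few greps over the captured log are enough. The sketch below is a convenience only; the log-file argument is a placeholder, and the counts assume the original one-message-per-line console output.

```bash
#!/usr/bin/env bash
# Summarize an SQ-deletion abort flood instead of reading it line by line.
# $1 is assumed to be a file holding the captured bdevperf/driver log.
log_file=$1

echo "completions aborted by SQ deletion: $(grep -c 'ABORTED - SQ DELETION' "$log_file")"
echo "queued READs drained:  $(grep -c 'nvme_io_qpair_print_command.*READ sqid' "$log_file")"
echo "queued WRITEs drained: $(grep -c 'nvme_io_qpair_print_command.*WRITE sqid' "$log_file")"
echo "successful resets:     $(grep -c 'Resetting controller successful' "$log_file")"
```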
00:18:46.883 
00:18:46.883 Latency(us)
00:18:46.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:46.883 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:46.883 Verification LBA range: start 0x0 length 0x4000
00:18:46.883 NVMe0n1 : 15.01 7685.53 30.02 272.17 0.00 16054.75 1165.08 20097.71
00:18:46.883 ===================================================================================================================
00:18:46.883 Total : 7685.53 30.02 272.17 0.00 16054.75 1165.08 20097.71
00:18:46.883 Received shutdown signal, test time was about 15.000000 seconds
00:18:46.883 
00:18:46.883 Latency(us)
00:18:46.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:46.883 ===================================================================================================================
00:18:46.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2359648
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2359648 /var/tmp/bdevperf.sock
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2359648 ']'
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:46.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
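The @65-@67 trace lines above are the pass/fail gate for the first phase: the captured bdevperf output must contain exactly three 'Resetting controller successful' notices. A minimal standalone sketch of that check, assuming the output was saved to a file (the test uses test/nvmf/host/try.txt for this), looks like the following; the argument name is a placeholder.

```bash
#!/usr/bin/env bash
# Sketch of the reset-count gate from the trace above.
# $1: file holding the first bdevperf run's output (placeholder argument).
log_file=$1

# One "Resetting controller successful" notice is logged per completed failover.
count=$(grep -c 'Resetting controller successful' "$log_file" || true)

if (( count != 3 )); then
	echo "expected 3 successful controller resets, found $count" >&2
	exit 1
fi
echo "failover phase 1 OK ($count resets)"
```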
00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:46.883 02:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:46.883 [2024-05-15 02:35:34.169319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:46.883 02:35:34 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:47.163 [2024-05-15 02:35:34.410010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:47.163 02:35:34 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:47.420 NVMe0n1 00:18:47.420 02:35:34 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:47.985 00:18:47.985 02:35:35 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:48.241 00:18:48.241 02:35:35 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:48.241 02:35:35 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:48.498 02:35:35 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:48.755 02:35:36 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:52.031 02:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:52.031 02:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:52.031 02:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2360319 00:18:52.031 02:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.031 02:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2360319 00:18:53.404 0 00:18:53.405 02:35:40 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:53.405 [2024-05-15 02:35:33.642410] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:18:53.405 [2024-05-15 02:35:33.642500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359648 ] 00:18:53.405 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.405 [2024-05-15 02:35:33.714771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.405 [2024-05-15 02:35:33.820741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.405 [2024-05-15 02:35:35.993528] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:53.405 [2024-05-15 02:35:35.993612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.405 [2024-05-15 02:35:35.993636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.405 [2024-05-15 02:35:35.993652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.405 [2024-05-15 02:35:35.993666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.405 [2024-05-15 02:35:35.993679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.405 [2024-05-15 02:35:35.993697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.405 [2024-05-15 02:35:35.993711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.405 [2024-05-15 02:35:35.993725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.405 [2024-05-15 02:35:35.993738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:53.405 [2024-05-15 02:35:35.993785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:53.405 [2024-05-15 02:35:35.993814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8d2f0 (9): Bad file descriptor 00:18:53.405 [2024-05-15 02:35:36.045608] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:53.405 Running I/O for 1 seconds... 
00:18:53.405 
00:18:53.405 Latency(us)
00:18:53.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:53.405 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:53.405 Verification LBA range: start 0x0 length 0x4000
00:18:53.405 NVMe0n1 : 1.05 8309.38 32.46 0.00 0.00 14753.08 3252.53 44079.03
00:18:53.405 ===================================================================================================================
00:18:53.405 Total : 8309.38 32.46 0.00 0.00 14753.08 3252.53 44079.03
00:18:53.405 02:35:40 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:53.405 02:35:40 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:18:53.405 02:35:40 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:53.662 02:35:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:53.662 02:35:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:18:53.919 02:35:41 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:54.177 02:35:41 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2359648
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2359648 ']'
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2359648
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2359648
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2359648'
00:18:57.454 killing process with pid 2359648
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2359648
00:18:57.454 02:35:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2359648
00:18:57.711 02:35:45 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:18:57.711 02:35:45 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:18:57.969 
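Stripped of the xtrace noise, the second phase traced above (from the listener additions at @76/@77 through the detach-and-verify steps at @98-@103) reduces to a handful of rpc.py calls against the target and the bdevperf RPC socket. The sketch below restates that sequence; the paths, addresses, ports and NQN are taken from the trace, the loops and variable names are only for readability, and the bdevperf perform_tests run that sits between the first detach and the later ones is omitted.

```bash
#!/usr/bin/env bash
# Condensed sketch of the traced failover sequence (not the verbatim failover.sh).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_py=$rootdir/scripts/rpc.py
bperf_rpc="$rpc_py -s /var/tmp/bdevperf.sock"
nqn=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on two extra portals for the initiator to fail over to.
$rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
$rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

# Attach NVMe0 inside bdevperf through all three portals.
for port in 4420 4421 4422; do
	$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
done
$bperf_rpc bdev_nvme_get_controllers | grep -q NVMe0

# Remove paths one at a time (4420 first, later 4422 and 4421), let the bdev
# fail over, and check that NVMe0 is still registered after each step.
for port in 4420 4422 4421; do
	$bperf_rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
	sleep 3
	$bperf_rpc bdev_nvme_get_controllers | grep -q NVMe0
done
```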
02:35:45 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.969 rmmod nvme_tcp 00:18:57.969 rmmod nvme_fabrics 00:18:57.969 rmmod nvme_keyring 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2357376 ']' 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2357376 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2357376 ']' 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2357376 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2357376 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2357376' 00:18:57.969 killing process with pid 2357376 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2357376 00:18:57.969 [2024-05-15 02:35:45.357298] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:57.969 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2357376 00:18:58.536 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.536 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.536 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.536 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.536 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.536 02:35:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.536 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.536 02:35:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.442 02:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:00.442 00:19:00.442 real 0m35.402s 00:19:00.442 user 
1m58.165s 00:19:00.442 sys 0m7.996s 00:19:00.442 02:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:00.442 02:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:00.442 ************************************ 00:19:00.442 END TEST nvmf_failover 00:19:00.442 ************************************ 00:19:00.442 02:35:47 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:00.442 02:35:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:00.442 02:35:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:00.442 02:35:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:00.442 ************************************ 00:19:00.442 START TEST nvmf_host_discovery 00:19:00.442 ************************************ 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:00.442 * Looking for test storage... 00:19:00.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:00.442 02:35:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.443 02:35:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:02.972 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:02.972 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:02.972 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:02.972 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.972 02:35:50 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.972 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:02.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:19:02.973 00:19:02.973 --- 10.0.0.2 ping statistics --- 00:19:02.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.973 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
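The nvmf_tcp_init steps traced above amount to carving the two E810 ports into an initiator side and a target-side network namespace, which the ping in each direction then confirms. Collected into one plain sequence (the interface names, addresses and namespace name are taken verbatim from the trace), it is essentially:

  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # first port -> target ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through on cvl_0_1
  ping -c 1 10.0.0.2                                             # initiator -> target check (above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check (reply follows)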
00:19:02.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:19:02.973 00:19:02.973 --- 10.0.0.1 ping statistics --- 00:19:02.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.973 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:02.973 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2363334 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2363334 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2363334 ']' 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:03.230 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.230 [2024-05-15 02:35:50.447332] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:19:03.231 [2024-05-15 02:35:50.447424] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.231 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.231 [2024-05-15 02:35:50.533653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.488 [2024-05-15 02:35:50.654836] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
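nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers. A condensed sketch of that step follows; the binary path and the -i/-e/-m arguments are taken from the trace, while the polling loop is an assumption about what waitforlisten does internally (the real helper lives in common/autotest_common.sh):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Keep poking the default RPC socket until the app is up and listening.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done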
00:19:03.488 [2024-05-15 02:35:50.654891] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.488 [2024-05-15 02:35:50.654907] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.488 [2024-05-15 02:35:50.654920] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.488 [2024-05-15 02:35:50.654940] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.488 [2024-05-15 02:35:50.654997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.488 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:03.488 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:19:03.488 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:03.488 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.489 [2024-05-15 02:35:50.801507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.489 [2024-05-15 02:35:50.809459] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:03.489 [2024-05-15 02:35:50.809723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.489 null0 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.489 null1 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2363356 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2363356 /tmp/host.sock 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2363356 ']' 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:03.489 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:03.489 02:35:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.489 [2024-05-15 02:35:50.881571] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:19:03.489 [2024-05-15 02:35:50.881653] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363356 ] 00:19:03.747 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.747 [2024-05-15 02:35:50.955113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.747 [2024-05-15 02:35:51.071098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.004 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:04.004 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:19:04.004 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:04.004 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:04.004 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.004 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.004 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.004 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:04.004 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.005 02:35:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:04.005 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.262 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.263 [2024-05-15 02:35:51.463440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:04.263 
02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:19:04.263 02:35:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:19:04.828 [2024-05-15 02:35:52.210246] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:04.828 [2024-05-15 02:35:52.210282] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:04.828 [2024-05-15 02:35:52.210315] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:05.086 [2024-05-15 02:35:52.296570] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:05.343 [2024-05-15 02:35:52.519827] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:05.343 [2024-05-15 02:35:52.519857] 
bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 
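The rpc_cmd | jq | sort | xargs pipelines that repeat throughout this trace are small helpers from host/discovery.sh and common/autotest_common.sh. Reconstructed from the trace lines themselves (rpc_cmd is the autotest wrapper around scripts/rpc.py, and /tmp/host.sock is the bdev_nvme host application started earlier), they look roughly like this:

  get_subsystem_names() {   # discovery.sh@59: controller names the host currently sees
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # discovery.sh@55: bdevs created from the discovered namespaces
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {   # discovery.sh@63: listener ports a given controller is attached to
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
      jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  waitforcondition() {      # autotest_common.sh@910ff: retry an arbitrary condition up to 10 times
    local cond=$1 max=10
    while (( max-- )); do
      eval "$cond" && return 0
      sleep 1
    done
    return 1
  }

The checks below simply wait for get_subsystem_names to report nvme0, get_bdev_list to report nvme0n1 (and nvme0n2 once null1 is added), get_subsystem_paths nvme0 to match the listener port, and the notify_get_notifications count to reach the expected value.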
00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:05.343 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.606 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.606 [2024-05-15 02:35:52.931756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:05.606 [2024-05-15 02:35:52.932162] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:05.607 [2024-05-15 02:35:52.932200] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.607 02:35:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:05.607 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.922 [2024-05-15 02:35:53.018542] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:05.922 02:35:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:19:06.180 [2024-05-15 02:35:53.326057] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:06.180 [2024-05-15 02:35:53.326093] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:06.180 [2024-05-15 02:35:53.326104] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:06.745 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.003 [2024-05-15 02:35:54.163764] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:07.003 [2024-05-15 02:35:54.163806] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:07.003 [2024-05-15 02:35:54.171054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.003 [2024-05-15 02:35:54.171087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.003 [2024-05-15 02:35:54.171105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.003 [2024-05-15 02:35:54.171119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.003 [2024-05-15 02:35:54.171133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.003 [2024-05-15 02:35:54.171147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.003 [2024-05-15 02:35:54.171161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.003 [2024-05-15 02:35:54.171177] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.003 [2024-05-15 02:35:54.171192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7a840 is same with the state(5) to be set 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.003 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:07.004 [2024-05-15 02:35:54.181046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7a840 (9): Bad file descriptor 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.004 [2024-05-15 02:35:54.191092] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:07.004 [2024-05-15 02:35:54.191384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.191585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.191610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7a840 with addr=10.0.0.2, port=4420 00:19:07.004 [2024-05-15 02:35:54.191626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7a840 is same with the state(5) to be set 00:19:07.004 [2024-05-15 02:35:54.191649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7a840 (9): Bad file descriptor 00:19:07.004 [2024-05-15 02:35:54.191677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:07.004 [2024-05-15 02:35:54.191692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:07.004 [2024-05-15 02:35:54.191726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:07.004 [2024-05-15 02:35:54.191746] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
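The waitforcondition/eval lines traced throughout this section (common/autotest_common.sh@910-@916) show the polling helper the test leans on while the controller above keeps failing to reconnect to the removed 4420 listener. A minimal sketch of that loop, reconstructed only from the xtrace output (the timeout return value is an assumption):

    waitforcondition() {
        # The condition arrives as a string and is re-evaluated each pass, e.g.
        # '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1   # assumed: give up after the retries are exhausted
    }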
00:19:07.004 [2024-05-15 02:35:54.201181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:07.004 [2024-05-15 02:35:54.201460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.201672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.201700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7a840 with addr=10.0.0.2, port=4420 00:19:07.004 [2024-05-15 02:35:54.201717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7a840 is same with the state(5) to be set 00:19:07.004 [2024-05-15 02:35:54.201741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7a840 (9): Bad file descriptor 00:19:07.004 [2024-05-15 02:35:54.201764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:07.004 [2024-05-15 02:35:54.201779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:07.004 [2024-05-15 02:35:54.201793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:07.004 [2024-05-15 02:35:54.201814] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.004 [2024-05-15 02:35:54.211250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:07.004 [2024-05-15 02:35:54.211510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.211739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.211770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7a840 with addr=10.0.0.2, port=4420 00:19:07.004 [2024-05-15 02:35:54.211787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7a840 is same with the state(5) to be set 00:19:07.004 [2024-05-15 02:35:54.211812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7a840 (9): Bad file descriptor 00:19:07.004 [2024-05-15 02:35:54.211835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:07.004 [2024-05-15 02:35:54.211850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:07.004 [2024-05-15 02:35:54.211864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:07.004 [2024-05-15 02:35:54.211885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
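Those reconnect attempts are interleaved with host/discovery.sh@55 calls that check the namespace bdevs. From the rpc_cmd/jq/sort/xargs steps in the trace, the helper behind them looks roughly like this (rpc_cmd being the suite's wrapper around scripts/rpc.py, pointed at the host app's /tmp/host.sock):

    get_bdev_list() {
        # List every bdev exposed by the host application and flatten the
        # names to one sorted, space-separated line for easy comparison.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' \
            | sort \
            | xargs
    }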
00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:07.004 [2024-05-15 02:35:54.221338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:07.004 [2024-05-15 02:35:54.221615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.221836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.221864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7a840 with addr=10.0.0.2, port=4420 00:19:07.004 [2024-05-15 02:35:54.221882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7a840 is same with the state(5) to be set 00:19:07.004 [2024-05-15 02:35:54.221906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7a840 (9): Bad file descriptor 00:19:07.004 [2024-05-15 02:35:54.221952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:07.004 [2024-05-15 02:35:54.221997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:07.004 [2024-05-15 02:35:54.222011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:07.004 [2024-05-15 02:35:54.222030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
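The host/discovery.sh@63 queries above drive the '[[ "$(get_subsystem_paths nvme0)" == ... ]]' waits; reconstructed from those trace lines, the helper is approximately:

    get_subsystem_paths() {
        # Print the listener port (trsvcid) of each path attached to the
        # named controller, numerically sorted, e.g. "4420 4421".
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' \
            | sort -n \
            | xargs
    }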
00:19:07.004 [2024-05-15 02:35:54.231418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:07.004 [2024-05-15 02:35:54.231688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.231925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.231958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7a840 with addr=10.0.0.2, port=4420 00:19:07.004 [2024-05-15 02:35:54.231974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7a840 is same with the state(5) to be set 00:19:07.004 [2024-05-15 02:35:54.231996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7a840 (9): Bad file descriptor 00:19:07.004 [2024-05-15 02:35:54.232029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:07.004 [2024-05-15 02:35:54.232047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:07.004 [2024-05-15 02:35:54.232060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:07.004 [2024-05-15 02:35:54.232079] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.004 [2024-05-15 02:35:54.241494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:07.004 [2024-05-15 02:35:54.241728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.241935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.241964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7a840 with addr=10.0.0.2, port=4420 00:19:07.004 [2024-05-15 02:35:54.241996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7a840 is same with the state(5) to be set 00:19:07.004 [2024-05-15 02:35:54.242018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7a840 (9): Bad file descriptor 00:19:07.004 [2024-05-15 02:35:54.242039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:07.004 [2024-05-15 02:35:54.242058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:07.004 [2024-05-15 02:35:54.242071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:07.004 [2024-05-15 02:35:54.242104] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
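The is_notification_count_eq checks count how many new discovery notifications the host app has queued since the last consumed notify_id. The bookkeeping below is inferred from the notification_count=/notify_id= assignments in the trace (notify_id advancing 1 -> 2 -> 4), so treat it as a sketch rather than the exact helper:

    get_notification_count() {
        # Count notifications newer than the last seen id, then advance past them.
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }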
00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.004 [2024-05-15 02:35:54.251570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:07.004 [2024-05-15 02:35:54.251816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.252023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.004 [2024-05-15 02:35:54.252050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7a840 with addr=10.0.0.2, port=4420 00:19:07.004 [2024-05-15 02:35:54.252066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7a840 is same with the state(5) to be set 00:19:07.004 [2024-05-15 02:35:54.252088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7a840 (9): Bad file descriptor 00:19:07.004 [2024-05-15 02:35:54.252140] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:07.004 [2024-05-15 02:35:54.252168] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:07.004 [2024-05-15 02:35:54.252201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:07.004 [2024-05-15 02:35:54.252238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:07.004 [2024-05-15 02:35:54.252254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:07.004 [2024-05-15 02:35:54.252280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
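Put together, the step traced through host/discovery.sh@127-@132 condenses to the sequence below: drop the first listener, then wait until the host prunes the stale 4420 path so only the second port remains, with no extra namespace notifications (4420/4421 correspond to $NVMF_PORT/$NVMF_SECOND_PORT in this run):

    # Condensed sketch of the traced discovery.sh@127-@132 step.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
    is_notification_count_eq 0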
00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:07.004 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:07.005 02:35:54 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:07.005 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.263 02:35:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.195 [2024-05-15 02:35:55.498785] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:08.195 [2024-05-15 02:35:55.498816] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:08.195 [2024-05-15 02:35:55.498844] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:08.195 [2024-05-15 02:35:55.587141] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:19:08.452 [2024-05-15 02:35:55.856150] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:08.452 [2024-05-15 02:35:55.856185] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.452 02:35:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.709 request: 00:19:08.709 { 00:19:08.709 "name": "nvme", 00:19:08.709 "trtype": "tcp", 00:19:08.709 "traddr": "10.0.0.2", 00:19:08.709 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:08.709 "adrfam": "ipv4", 00:19:08.709 "trsvcid": "8009", 00:19:08.709 "wait_for_attach": true, 00:19:08.709 "method": "bdev_nvme_start_discovery", 00:19:08.709 "req_id": 1 00:19:08.709 } 00:19:08.709 Got JSON-RPC error response 00:19:08.709 response: 00:19:08.709 { 00:19:08.709 "code": -17, 00:19:08.709 "message": "File exists" 00:19:08.709 } 00:19:08.709 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:08.709 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:08.709 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.709 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.710 request: 00:19:08.710 { 00:19:08.710 "name": "nvme_second", 00:19:08.710 "trtype": "tcp", 00:19:08.710 "traddr": "10.0.0.2", 00:19:08.710 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:08.710 "adrfam": "ipv4", 00:19:08.710 "trsvcid": "8009", 00:19:08.710 "wait_for_attach": true, 00:19:08.710 "method": "bdev_nvme_start_discovery", 00:19:08.710 "req_id": 1 00:19:08.710 } 00:19:08.710 Got JSON-RPC error response 00:19:08.710 response: 00:19:08.710 { 00:19:08.710 "code": -17, 00:19:08.710 "message": "File exists" 00:19:08.710 } 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:08.710 02:35:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.710 02:35:56 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.710 02:35:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.082 [2024-05-15 02:35:57.075604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.082 [2024-05-15 02:35:57.075806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.082 [2024-05-15 02:35:57.075836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96c20 with addr=10.0.0.2, port=8010 00:19:10.082 [2024-05-15 02:35:57.075862] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:10.082 [2024-05-15 02:35:57.075878] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:10.082 [2024-05-15 02:35:57.075892] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:11.013 [2024-05-15 02:35:58.078141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.013 [2024-05-15 02:35:58.078407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.013 [2024-05-15 02:35:58.078433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96c20 with addr=10.0.0.2, port=8010 00:19:11.013 [2024-05-15 02:35:58.078464] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:11.013 [2024-05-15 02:35:58.078480] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:11.013 [2024-05-15 02:35:58.078493] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:11.946 [2024-05-15 02:35:59.080269] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:11.946 request: 00:19:11.946 { 00:19:11.946 "name": "nvme_second", 00:19:11.946 "trtype": "tcp", 00:19:11.946 "traddr": "10.0.0.2", 00:19:11.946 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:11.946 "adrfam": "ipv4", 00:19:11.946 "trsvcid": "8010", 00:19:11.946 "attach_timeout_ms": 3000, 00:19:11.946 
"method": "bdev_nvme_start_discovery", 00:19:11.946 "req_id": 1 00:19:11.946 } 00:19:11.946 Got JSON-RPC error response 00:19:11.946 response: 00:19:11.946 { 00:19:11.946 "code": -110, 00:19:11.946 "message": "Connection timed out" 00:19:11.946 } 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2363356 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.946 rmmod nvme_tcp 00:19:11.946 rmmod nvme_fabrics 00:19:11.946 rmmod nvme_keyring 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2363334 ']' 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2363334 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 2363334 ']' 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 2363334 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2363334 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2363334' 00:19:11.946 killing process with pid 2363334 00:19:11.946 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 2363334 00:19:11.947 [2024-05-15 02:35:59.208801] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:11.947 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 2363334 00:19:12.205 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:12.205 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:12.205 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:12.205 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:12.205 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:12.205 02:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.205 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.205 02:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.732 02:36:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:14.732 00:19:14.732 real 0m13.778s 00:19:14.732 user 0m19.525s 00:19:14.732 sys 0m3.115s 00:19:14.732 02:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:14.732 02:36:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.732 ************************************ 00:19:14.732 END TEST nvmf_host_discovery 00:19:14.732 ************************************ 00:19:14.732 02:36:01 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:14.732 02:36:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:14.733 02:36:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:14.733 02:36:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:14.733 ************************************ 00:19:14.733 START TEST nvmf_host_multipath_status 00:19:14.733 ************************************ 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:14.733 * Looking for test storage... 
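For reference, the teardown traced just above (discovery.sh@159-@162 into nvmftestfini/nvmfcleanup) boils down to the steps below; the PIDs and the cvl_0_1 interface are specific to this run, and the exact helper internals may differ:

    # Condensed sketch of the traced teardown path.
    kill 2363356                    # host-side app for /tmp/host.sock (inferred)
    sync
    modprobe -v -r nvme-tcp         # rmmod output above shows the dependent modules going too
    modprobe -v -r nvme-fabrics
    kill 2363334 && wait 2363334    # nvmf target process for this test
    ip -4 addr flush cvl_0_1        # clear the test IP before the next suite starts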
00:19:14.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.733 02:36:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.733 02:36:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:16.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:16.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
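The trace above walks the PCI bus for supported NVMe-oF NICs and, for this job, matches the two Intel 0x8086:0x159b (E810) ports before mapping each one to its kernel net device through sysfs. A minimal stand-alone sketch of that lookup is below, assuming lspci is available; the device ID and the /sys/bus/pci/devices/<pci>/net path are taken from the trace, while the loop itself is illustrative and not the harness code.

#!/usr/bin/env bash
# Sketch: list Intel E810 ports (vendor:device 8086:159b, as reported in the
# trace) and the net devices the kernel created for them, mirroring the
# pci_devs / pci_net_devs arrays built by nvmf/common.sh above.
set -euo pipefail

for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue
        echo "Found net device under $pci: $(basename "$netdir")"
    done
done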
00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:16.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.634 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:16.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.635 02:36:03 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.635 02:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.635 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.635 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.635 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:16.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:19:16.894 00:19:16.894 --- 10.0.0.2 ping statistics --- 00:19:16.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.894 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:19:16.894 00:19:16.894 --- 10.0.0.1 ping statistics --- 00:19:16.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.894 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2366933 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2366933 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2366933 ']' 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:16.894 02:36:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:16.894 [2024-05-15 02:36:04.197335] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
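Before any multipath work starts, nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough stand-alone equivalent is sketched below; the binary path, namespace and socket come from the trace, while the polling loop (rpc_get_methods with a bounded retry count) only approximates what the harness does.

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the test namespace and wait for its JSON-RPC
# socket, approximating the nvmfappstart/waitforlisten sequence in the trace.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Poll the RPC socket until the app responds (give up after ~100 tries).
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.5
done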
00:19:16.894 [2024-05-15 02:36:04.197418] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.894 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.894 [2024-05-15 02:36:04.271923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:17.152 [2024-05-15 02:36:04.384060] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.152 [2024-05-15 02:36:04.384122] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.152 [2024-05-15 02:36:04.384136] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.152 [2024-05-15 02:36:04.384147] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.152 [2024-05-15 02:36:04.384156] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.152 [2024-05-15 02:36:04.384207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.152 [2024-05-15 02:36:04.384211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.085 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:18.085 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:19:18.085 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.085 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.085 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:18.085 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.085 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2366933 00:19:18.085 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:18.085 [2024-05-15 02:36:05.392350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.085 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:18.344 Malloc0 00:19:18.344 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:18.601 02:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:18.859 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.116 [2024-05-15 02:36:06.421754] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:19:19.116 [2024-05-15 02:36:06.422041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.116 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:19.374 [2024-05-15 02:36:06.670690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2367230 00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2367230 /var/tmp/bdevperf.sock 00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2367230 ']' 00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
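The rest of this test interleaves two things in the trace below: on the target side, subsystem nqn.2016-06.io.spdk:cnode1 has been created with -r (ANA reporting) and given TCP listeners on 10.0.0.2:4420 and 10.0.0.2:4421, and bdevperf then attaches both paths with -x multipath so their state can be read back. Each set_ANA_state step flips the listeners between optimized, non_optimized and inaccessible via nvmf_subsystem_listener_set_ana_state, and each port_status check reads bdev_nvme_get_io_paths over the bdevperf RPC socket and filters it with jq by trsvcid. A compact sketch of that status check follows; the rpc.py path, the socket and the jq filter are lifted from the trace, while the helper name and the sample expectations are illustrative.

#!/usr/bin/env bash
# Sketch of the per-port status check repeated throughout the trace below:
# read the io_paths bdevperf reports and extract one field (current,
# connected or accessible) for the path whose listener uses a given trsvcid.
set -euo pipefail

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# Example (illustrative): once port 4421 is set inaccessible, its path should
# stop being accessible while the 4420 path stays connected.
port_status 4421 accessible false
port_status 4420 connected true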
00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:19.374 02:36:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:19.631 02:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:19.631 02:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:19:19.631 02:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:19.888 02:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:20.485 Nvme0n1 00:19:20.485 02:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:20.744 Nvme0n1 00:19:20.744 02:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:20.744 02:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:23.272 02:36:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:23.273 02:36:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:23.273 02:36:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:23.273 02:36:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:24.645 02:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:24.645 02:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:24.646 02:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.646 02:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:24.646 02:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.646 02:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:24.646 02:36:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.646 02:36:11 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:24.903 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:24.903 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:24.903 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.903 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:25.160 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.160 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:25.160 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.160 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:25.419 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.419 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:25.419 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.419 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:25.677 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.678 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:25.678 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.678 02:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:25.936 02:36:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.936 02:36:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:25.936 02:36:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:26.194 02:36:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:26.451 02:36:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:27.385 02:36:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:27.385 02:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:27.385 02:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.385 02:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:27.643 02:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:27.643 02:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:27.643 02:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.643 02:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:27.901 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.901 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:27.901 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.901 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:28.159 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.159 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:28.159 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.159 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:28.417 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.417 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:28.417 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.417 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:28.675 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.675 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:28.675 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.675 02:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:28.932 02:36:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.932 02:36:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:28.932 02:36:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:29.189 02:36:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:29.447 02:36:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:30.380 02:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:30.380 02:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:30.380 02:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.380 02:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:30.638 02:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.638 02:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:30.638 02:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.638 02:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:30.895 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:30.895 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:30.895 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.895 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:31.153 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.153 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:31.153 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.153 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:31.412 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.412 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:31.412 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.412 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:31.670 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.670 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:31.670 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.670 02:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:31.928 02:36:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.928 02:36:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:31.928 02:36:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:32.186 02:36:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:32.444 02:36:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:33.377 02:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:33.377 02:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:33.377 02:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.377 02:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:33.633 02:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.633 02:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:33.633 02:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.633 02:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:33.890 02:36:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:33.890 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:33.890 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.890 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:34.147 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.147 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:34.147 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.147 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:34.404 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.404 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:34.404 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.404 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:34.662 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.662 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:34.662 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.662 02:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:34.924 02:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:34.924 02:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:34.924 02:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:35.226 02:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:35.483 02:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:36.413 02:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:36.413 02:36:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:36.413 02:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.413 02:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:36.670 02:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:36.670 02:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:36.670 02:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.670 02:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:36.927 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:36.927 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:36.927 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.927 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:37.184 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.184 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:37.184 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.184 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:37.441 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.441 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:37.441 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.441 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:37.697 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:37.697 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:37.697 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.697 02:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:37.953 02:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:37.953 02:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:37.953 02:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:38.211 02:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:38.211 02:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:39.583 02:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:39.583 02:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:39.583 02:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.583 02:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:39.583 02:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:39.583 02:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:39.583 02:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.583 02:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:39.841 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.841 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:39.841 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.841 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:40.099 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.099 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:40.099 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.099 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:40.356 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.356 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:40.356 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.356 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:40.614 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:40.614 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:40.614 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.614 02:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:40.872 02:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.872 02:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:41.130 02:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:41.130 02:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:41.387 02:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:41.645 02:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:42.578 02:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:42.578 02:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:42.578 02:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.578 02:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:42.836 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:42.836 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:42.836 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.836 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:19:43.095 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.095 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:43.095 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.095 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:43.352 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.352 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:43.353 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.353 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:43.610 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.610 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:43.610 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.610 02:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:43.868 02:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.868 02:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:43.868 02:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.868 02:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:44.126 02:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:44.126 02:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:44.126 02:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:44.383 02:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:44.641 02:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:45.573 02:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:19:45.573 02:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:45.573 02:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.573 02:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:45.830 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:45.830 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:45.830 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.830 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:46.087 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.087 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:46.087 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.087 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:46.344 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.344 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:46.344 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.344 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:46.601 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.601 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:46.601 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.601 02:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:46.858 02:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.858 02:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:46.858 02:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.858 02:36:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:47.116 02:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.116 02:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:47.116 02:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:47.373 02:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:47.630 02:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:48.567 02:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:48.567 02:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:48.567 02:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.567 02:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:48.824 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:48.824 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:48.824 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.824 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:49.081 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.081 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:49.081 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.081 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:49.338 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.338 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:49.338 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.338 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:49.595 02:36:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.595 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:49.595 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.595 02:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:49.852 02:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.853 02:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:49.853 02:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.853 02:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:50.110 02:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.110 02:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:50.110 02:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:50.367 02:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:50.624 02:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:51.995 02:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:51.995 02:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:51.995 02:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.995 02:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:51.995 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.995 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:51.995 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.995 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:52.252 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:52.252 02:36:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:52.252 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.252 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:52.509 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.509 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:52.509 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.509 02:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:52.767 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.767 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:52.767 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:52.767 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.024 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.024 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:53.024 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.024 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2367230 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2367230 ']' 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2367230 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2367230 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
2367230' 00:19:53.281 killing process with pid 2367230 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2367230 00:19:53.281 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2367230 00:19:53.549 Connection closed with partial response: 00:19:53.549 00:19:53.549 00:19:53.549 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2367230 00:19:53.549 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:53.549 [2024-05-15 02:36:06.727712] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:19:53.549 [2024-05-15 02:36:06.727801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367230 ] 00:19:53.549 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.549 [2024-05-15 02:36:06.802192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.549 [2024-05-15 02:36:06.908248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.549 Running I/O for 90 seconds... 00:19:53.549 [2024-05-15 02:36:22.389873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.549 [2024-05-15 02:36:22.389960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.549 [2024-05-15 02:36:22.390065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.549 [2024-05-15 02:36:22.390110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.549 [2024-05-15 02:36:22.390151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.549 [2024-05-15 02:36:22.390189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.549 [2024-05-15 02:36:22.390245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390268] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.549 [2024-05-15 02:36:22.390286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.549 [2024-05-15 02:36:22.390324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.549 [2024-05-15 02:36:22.390926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.549 [2024-05-15 02:36:22.390952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.390981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.390999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.391981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.391999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392087] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
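The try.txt dump above is dominated by command/completion pairs whose completions carry the ASYMMETRIC ACCESS INACCESSIBLE (03/02) path status while port 4420 is held inaccessible. A quick, hedged way to tally completion statuses in that dump (the try.txt path is the one cat'ed earlier in this section; the sed field layout is an assumption based on the line format shown here):

grep 'spdk_nvme_print_completion' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt \
  | sed -e 's/.*NOTICE\*: //' -e 's/ (.*//' \
  | sort | uniq -c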
00:19:53.550 [2024-05-15 02:36:22.392495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.392894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.392911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.393029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.393052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.393081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.393100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.393127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.393144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.393171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.393192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.393220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.393237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.393263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.393280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.550 [2024-05-15 02:36:22.393306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.550 [2024-05-15 02:36:22.393323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.551 [2024-05-15 02:36:22.393790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:19:53.551 [2024-05-15 02:36:22.393902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.393970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.393996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.394971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.394988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.395015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.395032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.395058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.395075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.551 [2024-05-15 02:36:22.395101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.551 [2024-05-15 02:36:22.395118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
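Each nvme_io_qpair_print_command line can be matched to its spdk_nvme_print_completion by the cid they share (this run only uses qid:1, so cid alone is enough). A hedged awk sketch that reports which commands ended with an ANA path-related status, assuming the same try.txt dump and the line layout shown above:

awk '
  /nvme_io_qpair_print_command/ {
      for (i = 1; i <= NF; i++) if ($i ~ /^cid:/) cid = $i
      cmd[cid] = $0                          # remember the most recent command per cid
  }
  /spdk_nvme_print_completion/ && /ASYMMETRIC ACCESS/ {
      for (i = 1; i <= NF; i++) if ($i ~ /^cid:/) cid = $i
      if (cid in cmd) print cmd[cid]         # the command this completion answers
  }' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt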
00:19:53.552 [2024-05-15 02:36:22.395387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.395973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.395990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:22.396496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:22.396512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.967699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:37.967769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.967810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.552 [2024-05-15 02:36:37.967829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.967854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:37.967871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.967894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:37.967912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.967942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:37.967960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.967993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:37.968010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:19:53.552 [2024-05-15 02:36:37.968035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.552 [2024-05-15 02:36:37.968052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.968075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.552 [2024-05-15 02:36:37.968093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.968116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.552 [2024-05-15 02:36:37.968134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.968157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.552 [2024-05-15 02:36:37.968173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.968197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:37.968215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.968254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:37.968273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.968311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:37.968329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.968995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.552 [2024-05-15 02:36:37.969020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.969045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.552 [2024-05-15 02:36:37.969063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.552 [2024-05-15 02:36:37.969086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.969621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.969890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:53.553 [2024-05-15 02:36:37.969927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.969982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.970000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.970023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.970040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.970062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.970078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.970103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.970119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.970141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.553 [2024-05-15 02:36:37.970158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.553 [2024-05-15 02:36:37.970180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.553 [2024-05-15 02:36:37.970196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.970223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.970254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.970278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.970294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.970315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.970332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.972196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.972257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.972296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.972334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.972372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.972410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.972448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.972486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.972526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.972586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.972627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.972666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.972704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.972744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.972782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.972821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.972844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.972860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.973661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.973685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.973713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.973732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.973754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.973771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:19:53.554 [2024-05-15 02:36:37.973794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.973810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.973832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.973854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.973877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.973894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.973916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.973942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.973968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.973986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.974024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.974064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.974104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.974143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.974182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.974221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.554 [2024-05-15 02:36:37.974260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.974298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.974342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.554 [2024-05-15 02:36:37.974365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.554 [2024-05-15 02:36:37.974382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.974421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.974459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.974498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.974538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.974578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.974617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.974656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.974694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.974733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.974772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.974812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.974870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.974924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.974975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.974998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:53.555 [2024-05-15 02:36:37.975014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.975054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.975495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.975541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.975581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.975620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.975659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.975698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.975737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.975797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.975836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.975874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.975927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.975961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.975978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.976000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.976017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.976039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.976056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.976078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.976095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.976117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.976133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.976155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.976172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.976194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.976210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.976232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.976248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.976271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.976291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.976315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.976332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.977428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.977453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.977480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.977498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.977521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.555 [2024-05-15 02:36:37.977538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.977560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.977576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.977599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.977615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.977653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.977669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.555 [2024-05-15 02:36:37.977691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.555 [2024-05-15 02:36:37.977709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:19:53.556 [2024-05-15 02:36:37.977731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.977748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.977770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.977787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.977808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.977824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.977846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.977866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.977889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.977905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.977926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.977969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.977994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.978010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.978049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.978089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.978129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.978169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.978209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.978248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.978288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.978328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.978372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.978397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.978414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.979639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.979664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.979692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.979711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.979734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.979752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.979776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.979793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.979815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.979832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.979855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.979872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.979895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.979911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.979941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.979961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.979984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.980001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.980041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.980080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.980125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:53.556 [2024-05-15 02:36:37.980165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.980204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.980257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.980297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.980335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.980372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.980411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.980450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.556 [2024-05-15 02:36:37.980488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.556 [2024-05-15 02:36:37.980525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.556 [2024-05-15 02:36:37.980547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.980563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.980615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.980654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.980692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.980730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.980769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.980807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.980844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.980883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.980944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.980970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.980987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.981009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.981026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.981049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.981065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.981088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.981109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.982604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.982629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.982656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.982675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.982698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.982715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.982737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.982756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.982779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.982796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.982819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.982836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:19:53.557 [2024-05-15 02:36:37.982874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.982891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.982914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.982939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.982981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.983000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.983039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.983081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.983618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.983681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.983723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.983763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.983803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.983843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.983883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.983923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.983973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.983996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.984013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.984036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.984054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.984076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.984093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.984116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.984133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.984160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.984178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.984200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.984217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.984241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.557 [2024-05-15 02:36:37.984258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.557 [2024-05-15 02:36:37.984281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.557 [2024-05-15 02:36:37.984299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.984338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.984355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.984378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.984395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.985873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.985898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.985926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.985954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.985977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.985995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.986036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.986076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:53.558 [2024-05-15 02:36:37.986117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.986163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.986450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.986565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.986603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.986646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.986803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.986841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.986967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.986986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.987009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.987026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.987049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.987066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.987089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.558 [2024-05-15 02:36:37.987106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.558 [2024-05-15 02:36:37.987129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.558 [2024-05-15 02:36:37.987150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.987174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.987191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.987214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.987231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.988661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.988687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.988715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.988733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:19:53.559 [2024-05-15 02:36:37.988758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.988775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.988798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.988815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.988854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.988877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.988916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.988942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.988967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.988984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.989006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.989023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.989045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.989062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.989084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.989101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.989130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.989147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.559 [2024-05-15 02:36:37.990761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:53.559 [2024-05-15 02:36:37.990924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.990959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.990977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.991001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.991018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.991040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.991061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.991084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.991100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.991123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.991145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.559 [2024-05-15 02:36:37.991169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.559 [2024-05-15 02:36:37.991187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.991210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.991227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.991250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.991266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.991289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.991306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.992997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 
nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.993021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.993067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.993107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.993146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.993331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.993371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.993667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.993707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.993746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.993789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.993806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.994466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:19:53.560 [2024-05-15 02:36:37.994493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.994511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.994551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.994589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.994628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.994667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.994705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.994760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.994802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.994842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.994881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.994936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.994962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.560 [2024-05-15 02:36:37.994980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.995003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.995020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.997652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.997678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.997706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.997725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.997748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.997765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.997789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.560 [2024-05-15 02:36:37.997821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.560 [2024-05-15 02:36:37.997844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.997861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.997900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.997917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.997949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.997978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.998017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.998058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.998102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:53.561 [2024-05-15 02:36:37.998393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.998525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.998605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.998647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.998685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:37.998840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:37.998939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:37.998966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:38.013433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:38.013495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:38.013534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:38.013572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:38.013618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:38.013656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:38.013695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:38.013732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:38.013769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:38.013806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:38.013843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:38.013880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.561 [2024-05-15 02:36:38.013947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.013974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:38.014007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.014031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:38.014048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.561 [2024-05-15 02:36:38.014071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.561 [2024-05-15 02:36:38.014088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:19:53.561 [2024-05-15 02:36:38.014116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.014134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.014894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.014920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.014958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.014979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.015002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.015019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.015042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.015059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.015082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.015099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.015122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.015139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.015163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.015180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.015203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.015234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.015257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.015274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.017169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.017478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.017532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.017569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.017607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.017644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.017854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.017910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.017959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.017985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:53.562 [2024-05-15 02:36:38.018002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.018025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.562 [2024-05-15 02:36:38.018041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.562 [2024-05-15 02:36:38.018064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.562 [2024-05-15 02:36:38.018081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.018120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.018160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.018200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.018239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.018283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.018324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.018363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 
nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.018403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.018457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.018511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.018549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.018586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.018624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.018661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.018697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.018735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.018772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.018797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.018814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.019654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.019678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.019705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.019738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.019761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.019777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.019799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.019815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.019836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.019852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.019873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.019889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.019925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.019951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.019991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:19:53.563 [2024-05-15 02:36:38.020071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.020465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.020511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.020551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.020710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.563 [2024-05-15 02:36:38.020750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.020892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.563 [2024-05-15 02:36:38.020910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.563 [2024-05-15 02:36:38.021602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.021631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.021659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.021678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.021701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.021732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.021754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.021770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.021792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.021807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.021829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.021861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.021883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.021900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.021947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.021977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.022096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.022136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.022179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:53.564 [2024-05-15 02:36:38.022329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.022367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.022457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.022794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.022833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.022873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.022896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.022913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.024532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.024578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.024618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.024658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.024698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.024738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.024778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.024825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.024864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.024920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.024965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.564 [2024-05-15 02:36:38.024985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.564 [2024-05-15 02:36:38.025009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.564 [2024-05-15 02:36:38.025026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.025065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.025104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.025143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
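Editor's note: the repeating *NOTICE* pairs above are SPDK's per-command error dump from nvme_qpair.c. nvme_io_qpair_print_command prints the submitted READ or WRITE (queue id, command id, LBA, length) and spdk_nvme_print_completion prints the completion that came back; "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is the NVMe path-related status (status code type 0x3, status code 0x2) returned while the namespace's ANA group is inaccessible on this path, which is the expected transient failure mode while the failover test switches paths. The sketch below is a hypothetical post-processing helper, not part of the test suite: it assumes the console output was saved to a file and that the NOTICE format matches the lines above; the script and the default "console.log" path are illustrative only.

#!/usr/bin/env python3
# Hypothetical helper: tally READ/WRITE commands reported in the nvme_qpair.c NOTICE lines
# of a saved console log (format assumed from the entries above).
import re
import sys
from collections import Counter

# Matches e.g. "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86"
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)")

def tally(log_path: str) -> Counter:
    counts = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            for opcode, _sqid, _cid in CMD_RE.findall(line):
                counts[opcode] += 1
    return counts

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "console.log"
    print(tally(path))  # e.g. Counter({'WRITE': ..., 'READ': ...})

Running it against the captured log gives a quick per-opcode count of how many I/Os hit the inaccessible path during the test window.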
00:19:53.565 [2024-05-15 02:36:38.025166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.025182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.025222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.025262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.025301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.025345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.025400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.025441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.025479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.025501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.025518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.026254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.026279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.026341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.026362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.026385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.026402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.026424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.026440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.026477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.026494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.028339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.028715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.565 [2024-05-15 02:36:38.028864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.565 [2024-05-15 02:36:38.028886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.565 [2024-05-15 02:36:38.028916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.565 Received shutdown signal, test time was about 32.272641 seconds 00:19:53.565 00:19:53.565 Latency(us) 00:19:53.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.565 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:53.565 Verification LBA range: start 0x0 length 0x4000 00:19:53.565 Nvme0n1 : 32.27 8073.63 31.54 0.00 0.00 15807.74 494.55 4026531.84 00:19:53.565 
=================================================================================================================== 00:19:53.565 Total : 8073.63 31.54 0.00 0.00 15807.74 494.55 4026531.84 00:19:53.565 02:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.823 rmmod nvme_tcp 00:19:53.823 rmmod nvme_fabrics 00:19:53.823 rmmod nvme_keyring 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2366933 ']' 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2366933 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2366933 ']' 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2366933 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2366933 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2366933' 00:19:53.823 killing process with pid 2366933 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2366933 00:19:53.823 [2024-05-15 02:36:41.178000] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:53.823 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2366933 00:19:54.081 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:19:54.081 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.081 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.081 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.081 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.081 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.081 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.081 02:36:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.610 02:36:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.610 00:19:56.610 real 0m41.906s 00:19:56.610 user 2m4.938s 00:19:56.610 sys 0m10.677s 00:19:56.610 02:36:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:56.610 02:36:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:56.610 ************************************ 00:19:56.610 END TEST nvmf_host_multipath_status 00:19:56.610 ************************************ 00:19:56.610 02:36:43 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:56.610 02:36:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:56.610 02:36:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:56.610 02:36:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.610 ************************************ 00:19:56.610 START TEST nvmf_discovery_remove_ifc 00:19:56.610 ************************************ 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:56.610 * Looking for test storage... 
00:19:56.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.610 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.611 02:36:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.136 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:59.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:59.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.137 02:36:46 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:59.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:59.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:59.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:19:59.137 00:19:59.137 --- 10.0.0.2 ping statistics --- 00:19:59.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.137 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:19:59.137 00:19:59.137 --- 10.0.0.1 ping statistics --- 00:19:59.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.137 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2374237 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2374237 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2374237 ']' 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:59.137 02:36:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.137 [2024-05-15 02:36:46.253447] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:19:59.137 [2024-05-15 02:36:46.253526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.137 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.137 [2024-05-15 02:36:46.328688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.137 [2024-05-15 02:36:46.433332] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.137 [2024-05-15 02:36:46.433385] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.137 [2024-05-15 02:36:46.433413] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.137 [2024-05-15 02:36:46.433423] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.137 [2024-05-15 02:36:46.433433] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.137 [2024-05-15 02:36:46.433459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.069 [2024-05-15 02:36:47.264536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.069 [2024-05-15 02:36:47.272480] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:00.069 [2024-05-15 02:36:47.272764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:00.069 null0 00:20:00.069 [2024-05-15 02:36:47.304661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.069 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2374386 00:20:00.070 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:00.070 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2374386 /tmp/host.sock 00:20:00.070 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2374386 ']' 00:20:00.070 02:36:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:20:00.070 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:00.070 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:00.070 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:00.070 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:00.070 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.070 [2024-05-15 02:36:47.368256] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:20:00.070 [2024-05-15 02:36:47.368320] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374386 ] 00:20:00.070 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.070 [2024-05-15 02:36:47.439889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.328 [2024-05-15 02:36:47.556827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.328 02:36:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:01.699 [2024-05-15 02:36:48.784183] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:01.699 [2024-05-15 02:36:48.784248] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:01.699 [2024-05-15 
02:36:48.784277] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:01.699 [2024-05-15 02:36:48.870540] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:01.699 [2024-05-15 02:36:48.973596] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:01.699 [2024-05-15 02:36:48.973665] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:01.699 [2024-05-15 02:36:48.973708] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:01.699 [2024-05-15 02:36:48.973736] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:01.700 [2024-05-15 02:36:48.973779] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:01.700 [2024-05-15 02:36:48.981545] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1995010 was disconnected and freed. delete nvme_qpair. 
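The host-side setup traced above reduces to the following minimal sketch (commands copied from the trace, with the long Jenkins workspace paths shortened and rpc_cmd written out as the rpc.py invocation it wraps; the target app is assumed to be already listening on 10.0.0.2 ports 8009 and 4420 inside the cvl_0_0_ns_spdk namespace, as set up earlier in this log):

  # second SPDK app acting as the NVMe-oF host, on its own RPC socket
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  # attach through the discovery service; flags are exactly those shown in the trace
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # the namespace of nqn.2016-06.io.spdk:cnode0 then shows up as bdev nvme0n1
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'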
00:20:01.700 02:36:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:01.700 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.957 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:01.957 02:36:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:02.890 02:36:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:03.829 02:36:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:05.213 02:36:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:06.150 02:36:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
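The repeated get_bdev_list/sleep 1 pairs above are the wait_for_bdev '' polling loop from discovery_remove_ifc.sh. In outline (a sketch only, with rpc_cmd again expanded to the rpc.py call used elsewhere in this log):

  get_bdev_list() {
      # same pipeline as the trace: bdev names, sorted, joined onto one line
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  # after the target-side address/link were removed, poll once per second until
  # nvme0n1 disappears from the host's bdev list
  while [[ "$(get_bdev_list)" != "" ]]; do
      sleep 1
  done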
00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:07.087 02:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:07.087 [2024-05-15 02:36:54.414353] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:07.087 [2024-05-15 02:36:54.414422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.087 [2024-05-15 02:36:54.414447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.087 [2024-05-15 02:36:54.414469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.087 [2024-05-15 02:36:54.414485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.087 [2024-05-15 02:36:54.414500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.087 [2024-05-15 02:36:54.414516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.087 [2024-05-15 02:36:54.414532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.087 [2024-05-15 02:36:54.414547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.087 [2024-05-15 02:36:54.414563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.087 [2024-05-15 02:36:54.414577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.087 [2024-05-15 02:36:54.414593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195c380 is same with the state(5) to be set 00:20:07.087 [2024-05-15 02:36:54.424372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195c380 (9): Bad file descriptor 00:20:07.087 [2024-05-15 02:36:54.434423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.021 02:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:08.021 02:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:08.021 02:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:08.021 02:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.021 02:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:08.021 02:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:08.021 02:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:08.279 [2024-05-15 02:36:55.436999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:09.211 [2024-05-15 
02:36:56.461001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:09.211 [2024-05-15 02:36:56.461044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x195c380 with addr=10.0.0.2, port=4420 00:20:09.211 [2024-05-15 02:36:56.461072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195c380 is same with the state(5) to be set 00:20:09.211 [2024-05-15 02:36:56.461594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195c380 (9): Bad file descriptor 00:20:09.211 [2024-05-15 02:36:56.461645] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.211 [2024-05-15 02:36:56.461690] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:20:09.211 [2024-05-15 02:36:56.461732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.211 [2024-05-15 02:36:56.461758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.211 [2024-05-15 02:36:56.461781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.211 [2024-05-15 02:36:56.461797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.211 [2024-05-15 02:36:56.461813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.211 [2024-05-15 02:36:56.461828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.211 [2024-05-15 02:36:56.461843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.212 [2024-05-15 02:36:56.461858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.212 [2024-05-15 02:36:56.461874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.212 [2024-05-15 02:36:56.461889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.212 [2024-05-15 02:36:56.461904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:20:09.212 [2024-05-15 02:36:56.462107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195b810 (9): Bad file descriptor 00:20:09.212 [2024-05-15 02:36:56.463129] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:09.212 [2024-05-15 02:36:56.463151] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:09.212 02:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.212 02:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:09.212 02:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.143 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:10.401 02:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:11.334 [2024-05-15 02:36:58.478940] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:11.334 [2024-05-15 02:36:58.478975] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:11.334 [2024-05-15 02:36:58.478998] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:11.334 [2024-05-15 02:36:58.567276] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:11.334 02:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:11.334 [2024-05-15 02:36:58.669492] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:11.335 [2024-05-15 02:36:58.669541] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:11.335 [2024-05-15 02:36:58.669571] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:11.335 [2024-05-15 02:36:58.669595] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:20:11.335 [2024-05-15 02:36:58.669610] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:11.335 [2024-05-15 02:36:58.677330] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x199f6f0 was disconnected and freed. delete nvme_qpair. 
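For readability, here is a minimal sketch of the two helpers that the xtrace lines above keep exercising (get_bdev_list and wait_for_bdev from host/discovery_remove_ifc.sh). It is reconstructed only from the pipeline of commands visible in this log, so the exact function bodies and the rpc_cmd wrapper are assumptions:

get_bdev_list() {
    # Query the SPDK host application over its RPC socket and flatten the
    # bdev names into one sorted, space-separated line (empty if none exist).
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the expected bdev (e.g. nvme1n1) shows up,
    # i.e. until discovery has re-attached the controller after the
    # target-side interface came back.
    local bdev=$1
    while [[ "$(get_bdev_list)" != "$bdev" ]]; do
        sleep 1
    done
}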
00:20:12.267 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:12.267 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:12.267 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:12.267 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.267 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:12.267 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:12.267 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:12.267 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2374386 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2374386 ']' 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2374386 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2374386 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2374386' 00:20:12.525 killing process with pid 2374386 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2374386 00:20:12.525 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2374386 00:20:12.783 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:12.783 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.783 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:20:12.783 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.783 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:20:12.783 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.783 02:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.783 rmmod nvme_tcp 00:20:12.783 rmmod nvme_fabrics 00:20:12.783 rmmod nvme_keyring 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
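The module-unload trace just above (nvmf/common.sh@117-@125) is the nvmfcleanup step of nvmftestfini. A rough reconstruction under stated assumptions follows; the retry/break logic and the $TEST_TRANSPORT variable name are guesses, only the individual commands appear in the log:

nvmfcleanup() {
    sync
    if [ "$TEST_TRANSPORT" == tcp ]; then    # expands to 'tcp' in this run
        set +e                               # unloading may fail while the modules are still in use
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
            sleep 1
        done
        modprobe -v -r nvme-fabrics
        set -e
    fi
    return 0
}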
00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2374237 ']' 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2374237 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2374237 ']' 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2374237 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2374237 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2374237' 00:20:12.783 killing process with pid 2374237 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2374237 00:20:12.783 [2024-05-15 02:37:00.065189] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:12.783 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2374237 00:20:13.040 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.040 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.040 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.040 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.040 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.040 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.040 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.040 02:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.568 02:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.568 00:20:15.568 real 0m18.806s 00:20:15.568 user 0m25.653s 00:20:15.568 sys 0m3.359s 00:20:15.568 02:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:15.568 02:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:15.568 ************************************ 00:20:15.568 END TEST nvmf_discovery_remove_ifc 00:20:15.568 ************************************ 00:20:15.568 02:37:02 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:15.568 02:37:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:15.568 02:37:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:15.568 02:37:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
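Each test script in this log is driven through run_test from autotest_common.sh, which prints the START/END banners and the real/user/sys summary seen above before handing control to the next test. A sketch of its general shape, inferred from that output rather than copied from the source (banner width and any extra bookkeeping are assumptions):

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # run the test script with its arguments; emits the real/user/sys summary
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

# Invoked as in the log line above:
#   run_test nvmf_identify_kernel_target .../host/identify_kernel_nvmf.sh --transport=tcp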
00:20:15.568 ************************************ 00:20:15.568 START TEST nvmf_identify_kernel_target 00:20:15.568 ************************************ 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:15.568 * Looking for test storage... 00:20:15.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:15.568 02:37:02 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:15.568 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.569 02:37:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:18.097 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:18.097 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:18.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:18.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.097 02:37:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:18.097 00:20:18.097 --- 10.0.0.2 ping statistics --- 00:20:18.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.097 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:20:18.097 00:20:18.097 --- 10.0.0.1 ping statistics --- 00:20:18.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.097 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.097 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:18.098 02:37:05 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:18.098 02:37:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:20:19.033 Waiting for block devices as requested 00:20:19.291 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:20:19.291 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:19.291 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:19.291 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:19.549 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:19.549 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:19.549 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:19.549 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:19.830 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:19.830 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:19.830 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:19.830 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:20.099 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:20.099 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:20.099 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:20.099 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:20.099 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:20.359 No valid GPT data, bailing 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:20.359 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:20:20.359 00:20:20.359 Discovery Log Number of Records 2, Generation counter 2 00:20:20.359 =====Discovery Log Entry 0====== 00:20:20.359 trtype: tcp 00:20:20.359 adrfam: ipv4 00:20:20.359 subtype: current discovery subsystem 00:20:20.359 treq: not specified, sq flow control disable supported 00:20:20.359 portid: 1 00:20:20.359 trsvcid: 4420 00:20:20.359 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:20.359 traddr: 10.0.0.1 00:20:20.359 eflags: none 00:20:20.359 sectype: none 00:20:20.359 =====Discovery Log Entry 1====== 00:20:20.359 trtype: tcp 00:20:20.359 adrfam: ipv4 00:20:20.359 subtype: nvme subsystem 00:20:20.359 treq: not specified, sq flow control disable supported 00:20:20.359 portid: 1 00:20:20.359 trsvcid: 4420 00:20:20.359 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:20.359 traddr: 10.0.0.1 00:20:20.359 eflags: none 00:20:20.359 sectype: none 00:20:20.360 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:20.360 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:20.360 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.360 ===================================================== 00:20:20.360 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:20.360 ===================================================== 00:20:20.360 Controller Capabilities/Features 00:20:20.360 ================================ 00:20:20.360 Vendor ID: 0000 00:20:20.360 Subsystem Vendor ID: 0000 00:20:20.360 Serial Number: f075cd8a1957fc112734 00:20:20.360 Model Number: Linux 00:20:20.360 Firmware Version: 6.7.0-68 00:20:20.360 Recommended Arb Burst: 0 00:20:20.360 IEEE OUI Identifier: 00 00 00 00:20:20.360 Multi-path I/O 00:20:20.360 May have multiple subsystem ports: No 00:20:20.360 May have multiple 
controllers: No 00:20:20.360 Associated with SR-IOV VF: No 00:20:20.360 Max Data Transfer Size: Unlimited 00:20:20.360 Max Number of Namespaces: 0 00:20:20.360 Max Number of I/O Queues: 1024 00:20:20.360 NVMe Specification Version (VS): 1.3 00:20:20.360 NVMe Specification Version (Identify): 1.3 00:20:20.360 Maximum Queue Entries: 1024 00:20:20.360 Contiguous Queues Required: No 00:20:20.360 Arbitration Mechanisms Supported 00:20:20.360 Weighted Round Robin: Not Supported 00:20:20.360 Vendor Specific: Not Supported 00:20:20.360 Reset Timeout: 7500 ms 00:20:20.360 Doorbell Stride: 4 bytes 00:20:20.360 NVM Subsystem Reset: Not Supported 00:20:20.360 Command Sets Supported 00:20:20.360 NVM Command Set: Supported 00:20:20.360 Boot Partition: Not Supported 00:20:20.360 Memory Page Size Minimum: 4096 bytes 00:20:20.360 Memory Page Size Maximum: 4096 bytes 00:20:20.360 Persistent Memory Region: Not Supported 00:20:20.360 Optional Asynchronous Events Supported 00:20:20.360 Namespace Attribute Notices: Not Supported 00:20:20.360 Firmware Activation Notices: Not Supported 00:20:20.360 ANA Change Notices: Not Supported 00:20:20.360 PLE Aggregate Log Change Notices: Not Supported 00:20:20.360 LBA Status Info Alert Notices: Not Supported 00:20:20.360 EGE Aggregate Log Change Notices: Not Supported 00:20:20.360 Normal NVM Subsystem Shutdown event: Not Supported 00:20:20.360 Zone Descriptor Change Notices: Not Supported 00:20:20.360 Discovery Log Change Notices: Supported 00:20:20.360 Controller Attributes 00:20:20.360 128-bit Host Identifier: Not Supported 00:20:20.360 Non-Operational Permissive Mode: Not Supported 00:20:20.360 NVM Sets: Not Supported 00:20:20.360 Read Recovery Levels: Not Supported 00:20:20.360 Endurance Groups: Not Supported 00:20:20.360 Predictable Latency Mode: Not Supported 00:20:20.360 Traffic Based Keep ALive: Not Supported 00:20:20.360 Namespace Granularity: Not Supported 00:20:20.360 SQ Associations: Not Supported 00:20:20.360 UUID List: Not Supported 00:20:20.360 Multi-Domain Subsystem: Not Supported 00:20:20.360 Fixed Capacity Management: Not Supported 00:20:20.360 Variable Capacity Management: Not Supported 00:20:20.360 Delete Endurance Group: Not Supported 00:20:20.360 Delete NVM Set: Not Supported 00:20:20.360 Extended LBA Formats Supported: Not Supported 00:20:20.360 Flexible Data Placement Supported: Not Supported 00:20:20.360 00:20:20.360 Controller Memory Buffer Support 00:20:20.360 ================================ 00:20:20.360 Supported: No 00:20:20.360 00:20:20.360 Persistent Memory Region Support 00:20:20.360 ================================ 00:20:20.360 Supported: No 00:20:20.360 00:20:20.360 Admin Command Set Attributes 00:20:20.360 ============================ 00:20:20.360 Security Send/Receive: Not Supported 00:20:20.360 Format NVM: Not Supported 00:20:20.360 Firmware Activate/Download: Not Supported 00:20:20.360 Namespace Management: Not Supported 00:20:20.360 Device Self-Test: Not Supported 00:20:20.360 Directives: Not Supported 00:20:20.360 NVMe-MI: Not Supported 00:20:20.360 Virtualization Management: Not Supported 00:20:20.360 Doorbell Buffer Config: Not Supported 00:20:20.360 Get LBA Status Capability: Not Supported 00:20:20.360 Command & Feature Lockdown Capability: Not Supported 00:20:20.360 Abort Command Limit: 1 00:20:20.360 Async Event Request Limit: 1 00:20:20.360 Number of Firmware Slots: N/A 00:20:20.360 Firmware Slot 1 Read-Only: N/A 00:20:20.360 Firmware Activation Without Reset: N/A 00:20:20.360 Multiple Update Detection Support: N/A 
00:20:20.360 Firmware Update Granularity: No Information Provided 00:20:20.360 Per-Namespace SMART Log: No 00:20:20.360 Asymmetric Namespace Access Log Page: Not Supported 00:20:20.360 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:20.360 Command Effects Log Page: Not Supported 00:20:20.360 Get Log Page Extended Data: Supported 00:20:20.360 Telemetry Log Pages: Not Supported 00:20:20.360 Persistent Event Log Pages: Not Supported 00:20:20.360 Supported Log Pages Log Page: May Support 00:20:20.360 Commands Supported & Effects Log Page: Not Supported 00:20:20.360 Feature Identifiers & Effects Log Page:May Support 00:20:20.360 NVMe-MI Commands & Effects Log Page: May Support 00:20:20.360 Data Area 4 for Telemetry Log: Not Supported 00:20:20.360 Error Log Page Entries Supported: 1 00:20:20.360 Keep Alive: Not Supported 00:20:20.360 00:20:20.360 NVM Command Set Attributes 00:20:20.360 ========================== 00:20:20.360 Submission Queue Entry Size 00:20:20.360 Max: 1 00:20:20.360 Min: 1 00:20:20.360 Completion Queue Entry Size 00:20:20.360 Max: 1 00:20:20.360 Min: 1 00:20:20.360 Number of Namespaces: 0 00:20:20.360 Compare Command: Not Supported 00:20:20.360 Write Uncorrectable Command: Not Supported 00:20:20.360 Dataset Management Command: Not Supported 00:20:20.360 Write Zeroes Command: Not Supported 00:20:20.360 Set Features Save Field: Not Supported 00:20:20.360 Reservations: Not Supported 00:20:20.360 Timestamp: Not Supported 00:20:20.360 Copy: Not Supported 00:20:20.360 Volatile Write Cache: Not Present 00:20:20.360 Atomic Write Unit (Normal): 1 00:20:20.360 Atomic Write Unit (PFail): 1 00:20:20.360 Atomic Compare & Write Unit: 1 00:20:20.360 Fused Compare & Write: Not Supported 00:20:20.360 Scatter-Gather List 00:20:20.360 SGL Command Set: Supported 00:20:20.360 SGL Keyed: Not Supported 00:20:20.360 SGL Bit Bucket Descriptor: Not Supported 00:20:20.360 SGL Metadata Pointer: Not Supported 00:20:20.360 Oversized SGL: Not Supported 00:20:20.360 SGL Metadata Address: Not Supported 00:20:20.360 SGL Offset: Supported 00:20:20.360 Transport SGL Data Block: Not Supported 00:20:20.360 Replay Protected Memory Block: Not Supported 00:20:20.360 00:20:20.360 Firmware Slot Information 00:20:20.360 ========================= 00:20:20.360 Active slot: 0 00:20:20.360 00:20:20.360 00:20:20.360 Error Log 00:20:20.360 ========= 00:20:20.360 00:20:20.360 Active Namespaces 00:20:20.360 ================= 00:20:20.360 Discovery Log Page 00:20:20.360 ================== 00:20:20.360 Generation Counter: 2 00:20:20.360 Number of Records: 2 00:20:20.360 Record Format: 0 00:20:20.360 00:20:20.360 Discovery Log Entry 0 00:20:20.360 ---------------------- 00:20:20.360 Transport Type: 3 (TCP) 00:20:20.360 Address Family: 1 (IPv4) 00:20:20.360 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:20.360 Entry Flags: 00:20:20.360 Duplicate Returned Information: 0 00:20:20.360 Explicit Persistent Connection Support for Discovery: 0 00:20:20.360 Transport Requirements: 00:20:20.360 Secure Channel: Not Specified 00:20:20.360 Port ID: 1 (0x0001) 00:20:20.360 Controller ID: 65535 (0xffff) 00:20:20.360 Admin Max SQ Size: 32 00:20:20.360 Transport Service Identifier: 4420 00:20:20.360 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:20.360 Transport Address: 10.0.0.1 00:20:20.360 Discovery Log Entry 1 00:20:20.360 ---------------------- 00:20:20.360 Transport Type: 3 (TCP) 00:20:20.360 Address Family: 1 (IPv4) 00:20:20.360 Subsystem Type: 2 (NVM Subsystem) 00:20:20.360 Entry Flags: 
00:20:20.360 Duplicate Returned Information: 0 00:20:20.360 Explicit Persistent Connection Support for Discovery: 0 00:20:20.360 Transport Requirements: 00:20:20.360 Secure Channel: Not Specified 00:20:20.360 Port ID: 1 (0x0001) 00:20:20.360 Controller ID: 65535 (0xffff) 00:20:20.360 Admin Max SQ Size: 32 00:20:20.360 Transport Service Identifier: 4420 00:20:20.360 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:20.360 Transport Address: 10.0.0.1 00:20:20.360 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:20.360 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.360 get_feature(0x01) failed 00:20:20.360 get_feature(0x02) failed 00:20:20.360 get_feature(0x04) failed 00:20:20.360 ===================================================== 00:20:20.360 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:20.360 ===================================================== 00:20:20.360 Controller Capabilities/Features 00:20:20.360 ================================ 00:20:20.360 Vendor ID: 0000 00:20:20.361 Subsystem Vendor ID: 0000 00:20:20.361 Serial Number: 4f1f690ea9f4fdaee73c 00:20:20.361 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:20.361 Firmware Version: 6.7.0-68 00:20:20.361 Recommended Arb Burst: 6 00:20:20.361 IEEE OUI Identifier: 00 00 00 00:20:20.361 Multi-path I/O 00:20:20.361 May have multiple subsystem ports: Yes 00:20:20.361 May have multiple controllers: Yes 00:20:20.361 Associated with SR-IOV VF: No 00:20:20.361 Max Data Transfer Size: Unlimited 00:20:20.361 Max Number of Namespaces: 1024 00:20:20.361 Max Number of I/O Queues: 128 00:20:20.361 NVMe Specification Version (VS): 1.3 00:20:20.361 NVMe Specification Version (Identify): 1.3 00:20:20.361 Maximum Queue Entries: 1024 00:20:20.361 Contiguous Queues Required: No 00:20:20.361 Arbitration Mechanisms Supported 00:20:20.361 Weighted Round Robin: Not Supported 00:20:20.361 Vendor Specific: Not Supported 00:20:20.361 Reset Timeout: 7500 ms 00:20:20.361 Doorbell Stride: 4 bytes 00:20:20.361 NVM Subsystem Reset: Not Supported 00:20:20.361 Command Sets Supported 00:20:20.361 NVM Command Set: Supported 00:20:20.361 Boot Partition: Not Supported 00:20:20.361 Memory Page Size Minimum: 4096 bytes 00:20:20.361 Memory Page Size Maximum: 4096 bytes 00:20:20.361 Persistent Memory Region: Not Supported 00:20:20.361 Optional Asynchronous Events Supported 00:20:20.361 Namespace Attribute Notices: Supported 00:20:20.361 Firmware Activation Notices: Not Supported 00:20:20.361 ANA Change Notices: Supported 00:20:20.361 PLE Aggregate Log Change Notices: Not Supported 00:20:20.361 LBA Status Info Alert Notices: Not Supported 00:20:20.361 EGE Aggregate Log Change Notices: Not Supported 00:20:20.361 Normal NVM Subsystem Shutdown event: Not Supported 00:20:20.361 Zone Descriptor Change Notices: Not Supported 00:20:20.361 Discovery Log Change Notices: Not Supported 00:20:20.361 Controller Attributes 00:20:20.361 128-bit Host Identifier: Supported 00:20:20.361 Non-Operational Permissive Mode: Not Supported 00:20:20.361 NVM Sets: Not Supported 00:20:20.361 Read Recovery Levels: Not Supported 00:20:20.361 Endurance Groups: Not Supported 00:20:20.361 Predictable Latency Mode: Not Supported 00:20:20.361 Traffic Based Keep ALive: Supported 00:20:20.361 Namespace Granularity: Not Supported 
00:20:20.361 SQ Associations: Not Supported 00:20:20.361 UUID List: Not Supported 00:20:20.361 Multi-Domain Subsystem: Not Supported 00:20:20.361 Fixed Capacity Management: Not Supported 00:20:20.361 Variable Capacity Management: Not Supported 00:20:20.361 Delete Endurance Group: Not Supported 00:20:20.361 Delete NVM Set: Not Supported 00:20:20.361 Extended LBA Formats Supported: Not Supported 00:20:20.361 Flexible Data Placement Supported: Not Supported 00:20:20.361 00:20:20.361 Controller Memory Buffer Support 00:20:20.361 ================================ 00:20:20.361 Supported: No 00:20:20.361 00:20:20.361 Persistent Memory Region Support 00:20:20.361 ================================ 00:20:20.361 Supported: No 00:20:20.361 00:20:20.361 Admin Command Set Attributes 00:20:20.361 ============================ 00:20:20.361 Security Send/Receive: Not Supported 00:20:20.361 Format NVM: Not Supported 00:20:20.361 Firmware Activate/Download: Not Supported 00:20:20.361 Namespace Management: Not Supported 00:20:20.361 Device Self-Test: Not Supported 00:20:20.361 Directives: Not Supported 00:20:20.361 NVMe-MI: Not Supported 00:20:20.361 Virtualization Management: Not Supported 00:20:20.361 Doorbell Buffer Config: Not Supported 00:20:20.361 Get LBA Status Capability: Not Supported 00:20:20.361 Command & Feature Lockdown Capability: Not Supported 00:20:20.361 Abort Command Limit: 4 00:20:20.361 Async Event Request Limit: 4 00:20:20.361 Number of Firmware Slots: N/A 00:20:20.361 Firmware Slot 1 Read-Only: N/A 00:20:20.361 Firmware Activation Without Reset: N/A 00:20:20.361 Multiple Update Detection Support: N/A 00:20:20.361 Firmware Update Granularity: No Information Provided 00:20:20.361 Per-Namespace SMART Log: Yes 00:20:20.361 Asymmetric Namespace Access Log Page: Supported 00:20:20.361 ANA Transition Time : 10 sec 00:20:20.361 00:20:20.361 Asymmetric Namespace Access Capabilities 00:20:20.361 ANA Optimized State : Supported 00:20:20.361 ANA Non-Optimized State : Supported 00:20:20.361 ANA Inaccessible State : Supported 00:20:20.361 ANA Persistent Loss State : Supported 00:20:20.361 ANA Change State : Supported 00:20:20.361 ANAGRPID is not changed : No 00:20:20.361 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:20.361 00:20:20.361 ANA Group Identifier Maximum : 128 00:20:20.361 Number of ANA Group Identifiers : 128 00:20:20.361 Max Number of Allowed Namespaces : 1024 00:20:20.361 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:20.361 Command Effects Log Page: Supported 00:20:20.361 Get Log Page Extended Data: Supported 00:20:20.361 Telemetry Log Pages: Not Supported 00:20:20.361 Persistent Event Log Pages: Not Supported 00:20:20.361 Supported Log Pages Log Page: May Support 00:20:20.361 Commands Supported & Effects Log Page: Not Supported 00:20:20.361 Feature Identifiers & Effects Log Page:May Support 00:20:20.361 NVMe-MI Commands & Effects Log Page: May Support 00:20:20.361 Data Area 4 for Telemetry Log: Not Supported 00:20:20.361 Error Log Page Entries Supported: 128 00:20:20.361 Keep Alive: Supported 00:20:20.361 Keep Alive Granularity: 1000 ms 00:20:20.361 00:20:20.361 NVM Command Set Attributes 00:20:20.361 ========================== 00:20:20.361 Submission Queue Entry Size 00:20:20.361 Max: 64 00:20:20.361 Min: 64 00:20:20.361 Completion Queue Entry Size 00:20:20.361 Max: 16 00:20:20.361 Min: 16 00:20:20.361 Number of Namespaces: 1024 00:20:20.361 Compare Command: Not Supported 00:20:20.361 Write Uncorrectable Command: Not Supported 00:20:20.361 Dataset Management Command: Supported 
00:20:20.361 Write Zeroes Command: Supported 00:20:20.361 Set Features Save Field: Not Supported 00:20:20.361 Reservations: Not Supported 00:20:20.361 Timestamp: Not Supported 00:20:20.361 Copy: Not Supported 00:20:20.361 Volatile Write Cache: Present 00:20:20.361 Atomic Write Unit (Normal): 1 00:20:20.361 Atomic Write Unit (PFail): 1 00:20:20.361 Atomic Compare & Write Unit: 1 00:20:20.361 Fused Compare & Write: Not Supported 00:20:20.361 Scatter-Gather List 00:20:20.361 SGL Command Set: Supported 00:20:20.361 SGL Keyed: Not Supported 00:20:20.361 SGL Bit Bucket Descriptor: Not Supported 00:20:20.361 SGL Metadata Pointer: Not Supported 00:20:20.361 Oversized SGL: Not Supported 00:20:20.361 SGL Metadata Address: Not Supported 00:20:20.361 SGL Offset: Supported 00:20:20.361 Transport SGL Data Block: Not Supported 00:20:20.361 Replay Protected Memory Block: Not Supported 00:20:20.361 00:20:20.361 Firmware Slot Information 00:20:20.361 ========================= 00:20:20.361 Active slot: 0 00:20:20.361 00:20:20.361 Asymmetric Namespace Access 00:20:20.361 =========================== 00:20:20.361 Change Count : 0 00:20:20.361 Number of ANA Group Descriptors : 1 00:20:20.361 ANA Group Descriptor : 0 00:20:20.361 ANA Group ID : 1 00:20:20.361 Number of NSID Values : 1 00:20:20.361 Change Count : 0 00:20:20.361 ANA State : 1 00:20:20.361 Namespace Identifier : 1 00:20:20.361 00:20:20.361 Commands Supported and Effects 00:20:20.361 ============================== 00:20:20.361 Admin Commands 00:20:20.361 -------------- 00:20:20.361 Get Log Page (02h): Supported 00:20:20.361 Identify (06h): Supported 00:20:20.361 Abort (08h): Supported 00:20:20.361 Set Features (09h): Supported 00:20:20.361 Get Features (0Ah): Supported 00:20:20.361 Asynchronous Event Request (0Ch): Supported 00:20:20.361 Keep Alive (18h): Supported 00:20:20.361 I/O Commands 00:20:20.361 ------------ 00:20:20.361 Flush (00h): Supported 00:20:20.361 Write (01h): Supported LBA-Change 00:20:20.361 Read (02h): Supported 00:20:20.361 Write Zeroes (08h): Supported LBA-Change 00:20:20.361 Dataset Management (09h): Supported 00:20:20.361 00:20:20.361 Error Log 00:20:20.361 ========= 00:20:20.361 Entry: 0 00:20:20.361 Error Count: 0x3 00:20:20.361 Submission Queue Id: 0x0 00:20:20.361 Command Id: 0x5 00:20:20.361 Phase Bit: 0 00:20:20.361 Status Code: 0x2 00:20:20.361 Status Code Type: 0x0 00:20:20.361 Do Not Retry: 1 00:20:20.361 Error Location: 0x28 00:20:20.361 LBA: 0x0 00:20:20.361 Namespace: 0x0 00:20:20.361 Vendor Log Page: 0x0 00:20:20.361 ----------- 00:20:20.361 Entry: 1 00:20:20.361 Error Count: 0x2 00:20:20.361 Submission Queue Id: 0x0 00:20:20.361 Command Id: 0x5 00:20:20.361 Phase Bit: 0 00:20:20.361 Status Code: 0x2 00:20:20.361 Status Code Type: 0x0 00:20:20.361 Do Not Retry: 1 00:20:20.361 Error Location: 0x28 00:20:20.361 LBA: 0x0 00:20:20.361 Namespace: 0x0 00:20:20.361 Vendor Log Page: 0x0 00:20:20.361 ----------- 00:20:20.361 Entry: 2 00:20:20.362 Error Count: 0x1 00:20:20.362 Submission Queue Id: 0x0 00:20:20.362 Command Id: 0x4 00:20:20.362 Phase Bit: 0 00:20:20.362 Status Code: 0x2 00:20:20.362 Status Code Type: 0x0 00:20:20.362 Do Not Retry: 1 00:20:20.362 Error Location: 0x28 00:20:20.362 LBA: 0x0 00:20:20.362 Namespace: 0x0 00:20:20.362 Vendor Log Page: 0x0 00:20:20.362 00:20:20.362 Number of Queues 00:20:20.362 ================ 00:20:20.362 Number of I/O Submission Queues: 128 00:20:20.362 Number of I/O Completion Queues: 128 00:20:20.362 00:20:20.362 ZNS Specific Controller Data 00:20:20.362 
============================ 00:20:20.362 Zone Append Size Limit: 0 00:20:20.362 00:20:20.362 00:20:20.362 Active Namespaces 00:20:20.362 ================= 00:20:20.362 get_feature(0x05) failed 00:20:20.362 Namespace ID:1 00:20:20.362 Command Set Identifier: NVM (00h) 00:20:20.362 Deallocate: Supported 00:20:20.362 Deallocated/Unwritten Error: Not Supported 00:20:20.362 Deallocated Read Value: Unknown 00:20:20.362 Deallocate in Write Zeroes: Not Supported 00:20:20.362 Deallocated Guard Field: 0xFFFF 00:20:20.362 Flush: Supported 00:20:20.362 Reservation: Not Supported 00:20:20.362 Namespace Sharing Capabilities: Multiple Controllers 00:20:20.362 Size (in LBAs): 1953525168 (931GiB) 00:20:20.362 Capacity (in LBAs): 1953525168 (931GiB) 00:20:20.362 Utilization (in LBAs): 1953525168 (931GiB) 00:20:20.362 UUID: 9076d984-d51e-42a6-9a49-1aeeb13bd9ee 00:20:20.362 Thin Provisioning: Not Supported 00:20:20.362 Per-NS Atomic Units: Yes 00:20:20.362 Atomic Boundary Size (Normal): 0 00:20:20.362 Atomic Boundary Size (PFail): 0 00:20:20.362 Atomic Boundary Offset: 0 00:20:20.362 NGUID/EUI64 Never Reused: No 00:20:20.362 ANA group ID: 1 00:20:20.362 Namespace Write Protected: No 00:20:20.362 Number of LBA Formats: 1 00:20:20.362 Current LBA Format: LBA Format #00 00:20:20.362 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:20.362 00:20:20.362 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:20.362 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.362 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:20:20.362 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.362 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:20:20.362 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.362 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.619 rmmod nvme_tcp 00:20:20.620 rmmod nvme_fabrics 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.620 02:37:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.522 
02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:22.522 02:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:20:23.896 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:23.896 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:23.896 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:23.896 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:23.896 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:23.896 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:23.896 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:23.896 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:23.896 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:23.896 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:23.896 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:23.896 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:23.896 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:23.896 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:23.896 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:23.896 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:25.272 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:20:25.272 00:20:25.272 real 0m10.016s 00:20:25.272 user 0m2.207s 00:20:25.272 sys 0m3.989s 00:20:25.272 02:37:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:25.272 02:37:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.272 ************************************ 00:20:25.272 END TEST nvmf_identify_kernel_target 00:20:25.272 ************************************ 00:20:25.272 02:37:12 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:25.272 02:37:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:25.272 02:37:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:25.272 02:37:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:25.272 ************************************ 00:20:25.272 START TEST nvmf_auth 00:20:25.272 ************************************ 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:25.272 * 
Looking for test storage... 00:20:25.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # nvmftestinit 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.272 02:37:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:27.801 02:37:15 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:27.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:27.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:27.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:27.801 Found net devices under 0000:0a:00.1: cvl_0_1 
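The trace above finds the two e810 functions by PCI ID (0x8086:0x159b) and maps each one to its kernel net interface by globbing /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of that lookup, assuming the usual Linux sysfs layout; the BDF is only an example taken from this run:

    # Map a PCI function to the net interfaces bound to it, as
    # gather_supported_nvmf_pci_devs does in the trace above.
    pci=0000:0a:00.0
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue                        # nothing bound to this function
        echo "Found net device under $pci: ${path##*/}"   # e.g. cvl_0_0
    done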
00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:27.801 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:20:28.060 00:20:28.060 --- 10.0.0.2 ping statistics --- 00:20:28.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.060 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:20:28.060 00:20:28.060 --- 10.0.0.1 ping statistics --- 00:20:28.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.060 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=2382369 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 2382369 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 2382369 ']' 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
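The nvmf_tcp_init steps traced above move the target-side interface into its own network namespace, leave the initiator interface in the root namespace, and confirm reachability with one ping in each direction. Collected into a single hedged sketch (needs root; interface names and addresses are the ones this run used, the shorthand variable names are invented here):

    # Consolidated sketch of nvmf_tcp_init as traced above. TGT_IF/INIT_IF/NS are
    # shorthand names for this sketch, not variables from nvmf/common.sh.
    TGT_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INIT_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INIT_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator side -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator side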
00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:28.060 02:37:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@723 -- # local digest len file key 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # local -A digests 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # digest=null 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # len=32 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # key=d61585100227f2395d0d1e962cc57efb 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GJg 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # format_dhchap_key d61585100227f2395d0d1e962cc57efb 0 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 d61585100227f2395d0d1e962cc57efb 0 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=d61585100227f2395d0d1e962cc57efb 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GJg 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GJg 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.GJg 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@723 -- # local digest len file key 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # local -A digests 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # digest=sha512 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # len=64 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:28.995 
02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # key=c23ec3e0b9a8eefa4569da031f0c53e3f499eee93231ab71c1b3a3ccb7dbd6bc 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7Ly 00:20:28.995 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # format_dhchap_key c23ec3e0b9a8eefa4569da031f0c53e3f499eee93231ab71c1b3a3ccb7dbd6bc 3 00:20:28.996 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 c23ec3e0b9a8eefa4569da031f0c53e3f499eee93231ab71c1b3a3ccb7dbd6bc 3 00:20:28.996 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.996 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.996 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=c23ec3e0b9a8eefa4569da031f0c53e3f499eee93231ab71c1b3a3ccb7dbd6bc 00:20:28.996 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:20:28.996 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7Ly 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7Ly 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7Ly 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@723 -- # local digest len file key 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # local -A digests 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # digest=null 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # len=48 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # key=2dee4fb5ad9b959bac65ed250d07e1a8ffc5dfe43d6f7ee6 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LJE 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # format_dhchap_key 2dee4fb5ad9b959bac65ed250d07e1a8ffc5dfe43d6f7ee6 0 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 2dee4fb5ad9b959bac65ed250d07e1a8ffc5dfe43d6f7ee6 0 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=2dee4fb5ad9b959bac65ed250d07e1a8ffc5dfe43d6f7ee6 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LJE 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LJE 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.LJE 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@723 -- # local digest len 
file key 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # local -A digests 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # digest=sha384 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # len=48 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # key=977c0f539b075f0696a06219497f25e669814dd25d50fa64 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OkB 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # format_dhchap_key 977c0f539b075f0696a06219497f25e669814dd25d50fa64 2 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 977c0f539b075f0696a06219497f25e669814dd25d50fa64 2 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=977c0f539b075f0696a06219497f25e669814dd25d50fa64 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OkB 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OkB 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.OkB 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:29.254 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@723 -- # local digest len file key 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # local -A digests 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # digest=sha256 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # len=32 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # key=0c1efe477567d7554580ec97b6d7c8d5 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kAR 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # format_dhchap_key 0c1efe477567d7554580ec97b6d7c8d5 1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 0c1efe477567d7554580ec97b6d7c8d5 1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=0c1efe477567d7554580ec97b6d7c8d5 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@730 -- # chmod 0600 
/tmp/spdk.key-sha256.kAR 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kAR 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kAR 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@723 -- # local digest len file key 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # local -A digests 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # digest=sha256 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # len=32 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # key=a8e6bdd49e310d6673dae186e212ecfa 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.M3w 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # format_dhchap_key a8e6bdd49e310d6673dae186e212ecfa 1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 a8e6bdd49e310d6673dae186e212ecfa 1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=a8e6bdd49e310d6673dae186e212ecfa 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.M3w 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.M3w 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.M3w 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@723 -- # local digest len file key 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # local -A digests 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # digest=sha384 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # len=48 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # key=5c8d6a78a28d6bc4057247c144374ddef1efdfeaa0ac28c1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.STU 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # format_dhchap_key 5c8d6a78a28d6bc4057247c144374ddef1efdfeaa0ac28c1 2 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 5c8d6a78a28d6bc4057247c144374ddef1efdfeaa0ac28c1 2 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=5c8d6a78a28d6bc4057247c144374ddef1efdfeaa0ac28c1 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:20:29.255 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.STU 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.STU 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.STU 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@723 -- # local digest len file key 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # local -A digests 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # digest=null 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # len=32 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # key=2239cfd4c79c519e4313c7f6e8b361c7 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kvP 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # format_dhchap_key 2239cfd4c79c519e4313c7f6e8b361c7 0 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 2239cfd4c79c519e4313c7f6e8b361c7 0 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=2239cfd4c79c519e4313c7f6e8b361c7 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kvP 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kvP 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.kvP 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@723 -- # local digest len file key 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@724 -- # local -A digests 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # digest=sha512 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@726 -- # len=64 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@727 -- # key=c32e7cd5426cbdc204e8747c0783ad5f2067f769a93ab97e2e468cda07876f4c 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tyd 00:20:29.513 02:37:16 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # format_dhchap_key c32e7cd5426cbdc204e8747c0783ad5f2067f769a93ab97e2e468cda07876f4c 3 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 c32e7cd5426cbdc204e8747c0783ad5f2067f769a93ab97e2e468cda07876f4c 3 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=c32e7cd5426cbdc204e8747c0783ad5f2067f769a93ab97e2e468cda07876f4c 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tyd 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tyd 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.tyd 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # ckeys[4]= 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- host/auth.sh@79 -- # waitforlisten 2382369 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 2382369 ']' 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
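Each secret above is produced by gen_dhchap_key: a fixed number of random bytes is read with xxd -p from /dev/urandom and the resulting hex string is handed to format_dhchap_key, whose python body is not captured by the xtrace. Judging from the DHHC-1 strings that appear later in this run (DHHC-1:00:MmRlZTRm...), the hex string is kept as ASCII, a 4-byte CRC32 is appended, and the whole thing is base64-wrapped. A hedged sketch of that formatting; gen_key_sketch is a made-up name and the CRC byte order is inferred, not quoted from nvmf/common.sh:

    # Hedged reconstruction of gen_dhchap_key/format_dhchap_key. Digest ids follow
    # the digests map traced above: 0=null, 1=sha256, 2=sha384, 3=sha512.
    gen_key_sketch() {
        local digest_id=$1 nbytes=$2 hex
        hex=$(xxd -p -c0 -l "$nbytes" /dev/urandom)   # e.g. 24 bytes -> 48 hex chars
        python3 -c 'import base64, sys, zlib; h = sys.argv[2].encode(); crc = zlib.crc32(h).to_bytes(4, "little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[1]), base64.b64encode(h + crc).decode()))' "$digest_id" "$hex"
    }
    gen_key_sketch 0 24   # null digest, 48-character secret, like keys[1] above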
00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:29.513 02:37:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GJg 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7Ly ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Ly 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.LJE 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.OkB ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OkB 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kAR 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.M3w ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.M3w 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.STU 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kvP ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kvP 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tyd 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@85 -- # nvmet_auth_init 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:29.771 02:37:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:20:31.168 Waiting for block devices as requested 00:20:31.168 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:20:31.168 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:31.168 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:31.426 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:31.426 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:31.426 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:31.426 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:31.683 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:31.683 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:31.683 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:31.940 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:31.940 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:31.940 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:31.940 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:32.197 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:32.197 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:32.197 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:32.763 02:37:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:32.763 02:37:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:32.763 02:37:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:32.763 02:37:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:20:32.763 02:37:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:32.763 02:37:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:32.763 02:37:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:32.763 02:37:19 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:32.763 02:37:19 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:32.763 No valid GPT data, bailing 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 
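configure_kernel_target then assembles the kernel nvmet subsystem through configfs: mkdir of the subsystem, namespace and port directories, a series of echoes, and (just below) the port address settings and the subsystem symlink. The xtrace hides the redirect targets of those echoes, so the attribute file names in this sketch are the standard kernel nvmet configfs ones, assumed rather than read from the log; the NQN, device and address values are the ones this run uses:

    # Hedged reconstruction of the configfs target setup traced here (root required).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1

    modprobe nvmet
    modprobe nvmet-tcp
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"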
00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:20:32.763 00:20:32.763 Discovery Log Number of Records 2, Generation counter 2 00:20:32.763 =====Discovery Log Entry 0====== 00:20:32.763 trtype: tcp 00:20:32.763 adrfam: ipv4 00:20:32.763 subtype: current discovery subsystem 00:20:32.763 treq: not specified, sq flow control disable supported 00:20:32.763 portid: 1 00:20:32.763 trsvcid: 4420 00:20:32.763 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:32.763 traddr: 10.0.0.1 00:20:32.763 eflags: none 00:20:32.763 sectype: none 00:20:32.763 =====Discovery Log Entry 1====== 00:20:32.763 trtype: tcp 00:20:32.763 adrfam: ipv4 00:20:32.763 subtype: nvme subsystem 00:20:32.763 treq: not specified, sq flow control disable supported 00:20:32.763 portid: 1 00:20:32.763 trsvcid: 4420 00:20:32.763 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:32.763 traddr: 10.0.0.1 00:20:32.763 eflags: none 00:20:32.763 sectype: none 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # IFS=, 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # printf 
%s sha256,sha384,sha512 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # IFS=, 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.763 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.022 nvme0n1 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.022 02:37:20 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:33.022 
02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.022 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.023 nvme0n1 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.023 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 
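On the target side, host/auth.sh@36-38 registers nqn.2024-02.io.spdk:host0 as the only allowed host, and nvmet_auth_set_key (host/auth.sh@42-51) programs the DH-HMAC-CHAP parameters for each iteration; again only the echoed values are traced. A sketch of the sha256/ffdhe2048/keyid 1 pass above, assuming the kernel's dhchap_* host attributes, follows. (The 00/01/02/03 field of a DHHC-1 secret indicates how the secret is hashed: 00 unhashed, 01/02/03 SHA-256/384/512.)

# Reconstruction of the target-side auth setup; the configfs file names are assumptions,
# the echoed values are taken verbatim from the trace above.
cfg=/sys/kernel/config/nvmet
host=$cfg/hosts/nqn.2024-02.io.spdk:host0
subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"     # assumed target of the 'echo 0' at host/auth.sh@37
ln -s "$host" "$subsys/allowed_hosts/"

# nvmet_auth_set_key sha256 ffdhe2048 1
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: > "$host/dhchap_key"
echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: > "$host/dhchap_ctrl_key"   # skipped for keyid 4, which has no ckey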
00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.281 nvme0n1 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:33.281 02:37:20 
nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.281 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.540 nvme0n1 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
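On the initiator side each connect_authenticate pass is the same four SPDK RPCs, all visible in the trace; rpc_cmd is presumably a thin wrapper around scripts/rpc.py talking to the SPDK application started earlier in the run. A sketch of the sha256/ffdhe2048/keyid 2 pass above:

# One connect_authenticate pass (host/auth.sh@60-65), flags as traced; the rpc.py
# invocation and the earlier registration of the key names key2/ckey2 are assumptions.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Restrict the initiator to the digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the kernel target; DH-HMAC-CHAP runs as part of the connect.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the controller came up, then tear it down before the next combination.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0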
00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:33.540 02:37:20 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.540 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.798 nvme0n1 00:20:33.798 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.798 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.798 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.798 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.798 02:37:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.798 02:37:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # 
digest=sha256 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.798 nvme0n1 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.798 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.057 02:37:21 
nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
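The rest of the section repeats this pattern as host/auth.sh sweeps every combination; the loop nesting and array names can be read directly off the @100-104 markers in the trace, only the comments below are added.

# Sweep reconstructed from the host/auth.sh@100-104 trace markers.
for digest in "${digests[@]}"; do            # sha256, sha384, sha512
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
        for keyid in "${!keys[@]}"; do       # 0..4; key 4 carries no controller key
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target (sketched earlier)
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # set options, attach, verify, detach
        done
    done
done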
00:20:34.057 nvme0n1 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:34.057 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # 
get_main_ns_ip 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.058 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.327 nvme0n1 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.327 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.619 nvme0n1 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.619 02:37:21 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.619 02:37:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.877 nvme0n1 00:20:34.877 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.877 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.877 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.877 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.877 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.877 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.877 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.877 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.877 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth 
-- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.878 nvme0n1 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.878 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:35.135 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.136 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.394 nvme0n1 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.394 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.652 nvme0n1 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.652 02:37:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.910 nvme0n1 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.910 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.911 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.169 nvme0n1 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.169 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.426 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- 
# [[ -z NVMF_INITIATOR_IP ]] 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.427 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.685 nvme0n1 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- 
host/auth.sh@57 -- # digest=sha256 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.685 02:37:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.250 nvme0n1 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 
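Note on the nvmet_auth_set_key steps traced above (host/auth.sh@42-51): before every connect attempt the target side is told which digest, DH group and DHHC-1 secrets to expect; the echoed values ('hmac(sha256)', the ffdhe group name, and the key/controller-key strings) are consistent with writing the kernel nvmet per-host configfs attributes. The sketch below is only an approximation under that assumption; the configfs path and attribute names are not taken from host/auth.sh.

# Rough target-side equivalent of nvmet_auth_set_key, assuming the kernel
# nvmet configfs layout (path and attribute names are assumptions).
nvmet_auth_set_key_sketch() {
	local digest=$1 dhgroup=$2 key=$3 ckey=$4
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

	echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha256)
	echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe6144
	echo "${key}"          > "${host}/dhchap_key"       # DHHC-1 host secret
	[[ -n ${ckey} ]] && echo "${ckey}" > "${host}/dhchap_ctrl_key"   # optional ctrlr secret
}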
00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.250 02:37:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.816 nvme0n1 00:20:37.816 02:37:25 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.816 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.816 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.816 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.817 
02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.817 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.380 nvme0n1 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.380 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo 
DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.381 02:37:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.947 nvme0n1 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.947 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.512 nvme0n1 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.512 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.513 02:37:26 
nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.513 02:37:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.447 nvme0n1 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:40.447 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.448 02:37:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.379 nvme0n1 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
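For orientation, the connect_authenticate half of each iteration is fully visible in the trace at host/auth.sh@55-65: restrict the initiator to a single digest/DH-group pair, attach the controller with the matching key(s), check that it shows up, then detach. Condensed into the underlying RPC calls (rpc.py stands in for the rpc_cmd wrapper; key1/ckey1 are assumed to have been registered earlier in the test under those names):

# One connect/verify/disconnect cycle as traced above, for sha256 + ffdhe8192, keyid 1.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0 (namespace nvme0n1)
scripts/rpc.py bdev_nvme_detach_controller nvme0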
00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:41.379 02:37:28 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.380 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.380 02:37:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.311 nvme0n1 00:20:42.311 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.311 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.311 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.311 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.311 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.311 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.568 02:37:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.499 nvme0n1 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 
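Each pass of the loops traced here pairs one digest, one DH group and one key index: nvmet_auth_set_key programs that combination into the target, and connect_authenticate then brings a host connection up against it. On the target side the helper emits the HMAC variant of the digest, the FFDHE group, the DHHC-1 host key and, when the key index has one, a bidirectional controller key; for key index 4 above the ckey is empty, so only the host key is set. The xtrace output shows the echoed values but not where they are written, so the following is only a rough sketch of the helper, with the argument handling inferred from the trace rather than quoted from host/auth.sh:

    # Rough sketch of nvmet_auth_set_key as exercised above. The destinations of
    # the echoes are not visible in the xtrace output and are deliberately omitted;
    # the keys/ckeys arrays are populated earlier in the run, outside this excerpt.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac(${digest})"                 # e.g. hmac(sha256)
        echo "${dhgroup}"                      # e.g. ffdhe8192
        echo "${key}"                          # DHHC-1 host key for this key index
        [[ -z ${ckey} ]] || echo "${ckey}"     # optional controller (bidirectional) key
    }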
00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha256 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:43.499 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.500 02:37:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.431 nvme0n1 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
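The host half of each pass is the two RPCs traced at host/auth.sh@60 and @61: bdev_nvme_set_options restricts the initiator to the digest and DH group under test, and bdev_nvme_attach_controller connects to the target at 10.0.0.1:4420 while naming the DH-HMAC-CHAP key (and, when present, the controller key) to authenticate with. Condensed from the sha256/ffdhe8192, key index 3 pass above; the key names key3 and ckey3 refer to keys registered earlier in the run, outside this excerpt:

    # Initiator-side sequence for one pass, issued through the suite's rpc_cmd wrapper.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3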
00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.431 nvme0n1 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.431 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@57 
-- # digest=sha384 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.432 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.689 nvme0n1 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.689 02:37:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.689 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.689 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.689 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.689 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.689 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.689 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.689 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:44.690 
02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.690 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 nvme0n1 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 nvme0n1 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.204 02:37:32 
nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.204 nvme0n1 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.204 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.205 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.205 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.205 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.205 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.205 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.205 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.205 02:37:32 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.462 nvme0n1 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:45.462 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.463 02:37:32 
nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.463 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.721 nvme0n1 00:20:45.721 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.721 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.721 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.721 02:37:32 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.721 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.721 02:37:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- 
host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.721 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.979 nvme0n1 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
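Every pass ends with the same check-and-teardown traced at host/auth.sh@64 and @65: if authentication succeeded, bdev_nvme_get_controllers reports a single controller named nvme0 (the standalone nvme0n1 lines in the log are the namespace bdev reported by the attach call), and the controller is then detached so the next digest/dhgroup/key combination starts from a clean state. In shorthand:

    # Per-pass verification and cleanup, mirroring the trace above.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ ${name} == "nvme0" ]]                   # authentication produced the expected controller
    rpc_cmd bdev_nvme_detach_controller nvme0  # tear down before the next combination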
00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.979 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.236 nvme0n1 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.236 02:37:33 
nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.236 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.493 nvme0n1 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:46.493 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- 
host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.494 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.751 nvme0n1 00:20:46.751 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.751 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.751 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.751 02:37:33 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.751 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.751 02:37:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 
-- # ip=NVMF_INITIATOR_IP 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.751 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.009 nvme0n1 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.009 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.266 nvme0n1 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.266 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.524 nvme0n1 00:20:47.524 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.524 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.524 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.524 02:37:34 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.524 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.524 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.801 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.801 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.801 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.801 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- 
# [[ -z NVMF_INITIATOR_IP ]] 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.802 02:37:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.111 nvme0n1 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- 
host/auth.sh@57 -- # digest=sha384 00:20:48.111 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.112 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.676 nvme0n1 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 
00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.676 02:37:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.933 nvme0n1 00:20:48.933 02:37:36 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.933 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.933 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.933 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.933 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:49.191 
02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.191 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.755 nvme0n1 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo 
DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.755 02:37:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.321 nvme0n1 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.321 02:37:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.887 nvme0n1 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.887 02:37:38 
nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.887 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.820 nvme0n1 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.820 02:37:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:51.820 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:51.821 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.821 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.821 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:52.753 nvme0n1 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
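The cycle that just finished above (set options for sha384/ffdhe8192, attach with key1/ckey1, check bdev_nvme_get_controllers, detach) is the pattern repeated for every digest, DH group and key index in this block. A minimal bash sketch of one such cycle follows; it uses only the rpc_cmd invocations visible in the trace, and the address, NQNs and key names are copied from the log rather than from the real host/auth.sh, so treat it as an illustration of the flow, not the test code itself.

  # One connect/verify/detach iteration, assuming rpc_cmd points at the running
  # SPDK application and that keys named key1/ckey1 have already been registered
  # (how they are registered is not shown in this part of the trace).
  digest=sha384
  dhgroup=ffdhe8192
  keyid=1

  # Restrict the initiator to the digest/DH group under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key and, for bidirectional auth, the controller key.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # The controller only appears if DH-HMAC-CHAP succeeded; verify, then detach.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0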
00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.753 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.754 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.754 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.754 02:37:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.754 02:37:39 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.754 02:37:39 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.754 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.754 02:37:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:53.686 nvme0n1 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.686 02:37:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:54.618 nvme0n1 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 
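A note on the target-side half of each iteration: the trace shows nvmet_auth_set_key receiving a digest, DH group and key index and echoing 'hmac(sha384)', the group name and the DHHC-1 secrets, but the excerpt does not show where those values are written. A minimal sketch of what such a helper plausibly does, assuming the Linux kernel nvmet target is being programmed through configfs (the mount point, host NQN entry and attribute names below are assumptions, not taken from this log):

nvmet_cfs=/sys/kernel/config/nvmet              # assumed configfs mount point
hostnqn=nqn.2024-02.io.spdk:host0               # host NQN used throughout the trace

nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}

        # kernel crypto naming, e.g. hmac(sha384) / ffdhe8192, as echoed in the trace
        echo "hmac($digest)" > "$nvmet_cfs/hosts/$hostnqn/dhchap_hash"
        echo "$dhgroup" > "$nvmet_cfs/hosts/$hostnqn/dhchap_dhgroup"
        echo "$key" > "$nvmet_cfs/hosts/$hostnqn/dhchap_key"
        # a controller key is only set for key indices that define one (keyid 4 has none)
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_cfs/hosts/$hostnqn/dhchap_ctrl_key"
}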
00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha384 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.618 02:37:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.553 nvme0n1 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
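For reference, the initiator-side flow each iteration runs (host/auth.sh@55-65 in the trace) is the same every time; condensed into one place, assuming only that rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py and that the keyN/ckeyN names were registered with the keyring earlier in auth.sh, outside this excerpt:

connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # only pass a controller key when one exists for this key index
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # restrict the initiator to the digest/DH group under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # connect to the target at 10.0.0.1:4420 using DH-HMAC-CHAP
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
        # the attach only produces a controller if authentication succeeded
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
}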
00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.553 02:37:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.812 nvme0n1 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 
-- # digest=sha512 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.812 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.073 nvme0n1 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.073 
02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.073 nvme0n1 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.073 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:56.332 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.333 nvme0n1 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.333 02:37:43 
nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.333 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.591 nvme0n1 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.591 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.592 02:37:43 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:56.849 nvme0n1 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.849 02:37:44 
nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.849 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.107 nvme0n1 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- 
host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.107 nvme0n1 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.107 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
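Zooming out, the nesting visible in the trace (host/auth.sh@100-103: for digest / for dhgroup / for keyid) means every digest is exercised against every DH group and every key index. Reconstructed from those trace lines, with the digests/dhgroups/keys arrays themselves populated earlier in the script and therefore assumed here:

for digest in "${digests[@]}"; do                # sha384 and sha512 appear in this excerpt
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe8192, ...
                for keyid in "${!keys[@]}"; do   # key indices 0 through 4
                        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the target side
                        connect_authenticate "$digest" "$dhgroup" "$keyid"   # authenticate from the SPDK initiator
                done
        done
done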
00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.366 nvme0n1 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.366 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:57.624 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.625 02:37:44 
nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.625 nvme0n1 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.625 02:37:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- 
host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.625 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.883 nvme0n1 00:20:57.883 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.883 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.883 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.883 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.883 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.883 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 
-- # ip=NVMF_INITIATOR_IP 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.141 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.400 nvme0n1 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.400 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.659 nvme0n1 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.659 02:37:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.918 nvme0n1 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- 
# [[ -z NVMF_INITIATOR_IP ]] 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.918 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.176 nvme0n1 00:20:59.176 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.176 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.176 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.176 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.176 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.176 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- 
host/auth.sh@57 -- # digest=sha512 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.435 02:37:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.693 nvme0n1 00:20:59.693 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.693 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.693 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.693 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.693 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.693 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 
00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:59.951 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:59.952 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.952 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.952 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:00.517 nvme0n1 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:00.517 
02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.517 02:37:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.115 nvme0n1 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo 
DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:21:01.115 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.116 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.680 nvme0n1 00:21:01.680 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.680 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.680 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.680 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.680 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.680 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.680 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.680 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.680 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.681 02:37:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:02.247 nvme0n1 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDYxNTg1MTAwMjI3ZjIzOTVkMGQxZTk2MmNjNTdlZmIxbdBQ: 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: ]] 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzIzZWMzZTBiOWE4ZWVmYTQ1NjlkYTAzMWYwYzUzZTNmNDk5ZWVlOTMyMzFhYjcxYzFiM2EzY2NiN2RiZDZiY8RbC+c=: 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=0 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.247 02:37:49 
nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.247 02:37:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:03.181 nvme0n1 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=1 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.181 02:37:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:04.113 nvme0n1 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:04.113 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MGMxZWZlNDc3NTY3ZDc1NTQ1ODBlYzk3YjZkN2M4ZDVLx/Of: 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: ]] 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:YThlNmJkZDQ5ZTMxMGQ2NjczZGFlMTg2ZTIxMmVjZmH+j0oD: 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=2 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.114 02:37:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:05.047 nvme0n1 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NWM4ZDZhNzhhMjhkNmJjNDA1NzI0N2MxNDQzNzRkZGVmMWVmZGZlYWEwYWMyOGMxjsAEew==: 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: ]] 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjIzOWNmZDRjNzljNTE5ZTQzMTNjN2Y2ZThiMzYxYzeiXzJj: 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=3 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.048 02:37:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.421 nvme0n1 00:21:06.421 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.421 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 
00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:YzMyZTdjZDU0MjZjYmRjMjA0ZTg3NDdjMDc4M2FkNWYyMDY3Zjc2OWE5M2FiOTdlMmU0NjhjZGEwNzg3NmY0Y0WQ6UI=: 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # digest=sha512 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@57 -- # keyid=4 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.422 02:37:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.988 nvme0n1 00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
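The trace above repeats one cycle per generated DH-HMAC-CHAP key: program the kernel nvmet target with the key it should expect, mirror the digest/dhgroup on the SPDK host side, attach, confirm the controller materialises, and detach. A condensed sketch of that cycle, reassembled only from the commands visible in the trace (nvmet_auth_set_key, rpc_cmd, keys[] and ckeys[] come from the test's auth.sh/common.sh helpers; the configfs writes hidden inside nvmet_auth_set_key are not shown in this excerpt, so they stay behind the helper call):

for keyid in "${!keys[@]}"; do
    # kernel target: expect this key for hmac(sha512) / ffdhe8192
    nvmet_auth_set_key sha512 ffdhe8192 "$keyid"

    # SPDK host side: same digest and DH group
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # attach with keyN, plus ckeyN for bidirectional auth when one was generated
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

    # authentication passed only if the controller actually shows up
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
done

After the last key the test switches to the negative cases traced next: attaching without a key, with the wrong key, and with a mismatched controller key, each expected to fail with the Invalid parameters JSON-RPC error shown below.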
00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.988 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MmRlZTRmYjVhZDliOTU5YmFjNjVlZDI1MGQwN2UxYThmZmM1ZGZlNDNkNmY3ZWU2g1vPHw==: 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: ]] 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:OTc3YzBmNTM5YjA3NWYwNjk2YTA2MjE5NDk3ZjI1ZTY2OTgxNGRkMjVkNTBmYTY0DNFFXA==: 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@112 -- # get_main_ns_ip 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.247 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:07.247 request: 00:21:07.247 { 00:21:07.247 "name": "nvme0", 00:21:07.247 "trtype": "tcp", 00:21:07.247 "traddr": "10.0.0.1", 00:21:07.247 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:07.247 "adrfam": "ipv4", 00:21:07.247 "trsvcid": "4420", 00:21:07.247 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:07.247 "method": "bdev_nvme_attach_controller", 00:21:07.247 "req_id": 1 00:21:07.247 } 00:21:07.247 Got JSON-RPC error response 00:21:07.247 response: 00:21:07.247 { 00:21:07.247 "code": -32602, 00:21:07.247 "message": "Invalid parameters" 00:21:07.247 } 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # jq length 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # get_main_ns_ip 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:07.248 request: 00:21:07.248 { 00:21:07.248 "name": "nvme0", 00:21:07.248 "trtype": "tcp", 00:21:07.248 "traddr": "10.0.0.1", 00:21:07.248 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:07.248 "adrfam": "ipv4", 00:21:07.248 "trsvcid": "4420", 00:21:07.248 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:07.248 "dhchap_key": "key2", 00:21:07.248 "method": "bdev_nvme_attach_controller", 00:21:07.248 "req_id": 1 00:21:07.248 } 00:21:07.248 Got JSON-RPC error response 00:21:07.248 response: 00:21:07.248 { 00:21:07.248 "code": -32602, 00:21:07.248 "message": "Invalid parameters" 00:21:07.248 } 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@120 -- # jq length 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # get_main_ns_ip 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@741 -- # local ip 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:07.248 request: 00:21:07.248 { 00:21:07.248 "name": "nvme0", 00:21:07.248 "trtype": "tcp", 00:21:07.248 "traddr": "10.0.0.1", 00:21:07.248 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:07.248 "adrfam": "ipv4", 00:21:07.248 "trsvcid": "4420", 00:21:07.248 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:07.248 "dhchap_key": "key1", 00:21:07.248 "dhchap_ctrlr_key": "ckey2", 00:21:07.248 "method": "bdev_nvme_attach_controller", 00:21:07.248 "req_id": 1 00:21:07.248 } 00:21:07.248 Got JSON-RPC error response 00:21:07.248 response: 00:21:07.248 { 00:21:07.248 "code": -32602, 00:21:07.248 "message": "Invalid parameters" 00:21:07.248 } 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@128 -- # cleanup 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.248 rmmod nvme_tcp 00:21:07.248 
rmmod nvme_fabrics 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 2382369 ']' 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 2382369 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 2382369 ']' 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 2382369 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:21:07.248 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:07.507 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2382369 00:21:07.507 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:07.507 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:07.507 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2382369' 00:21:07.507 killing process with pid 2382369 00:21:07.507 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 2382369 00:21:07.507 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 2382369 00:21:07.765 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.765 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:07.765 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.765 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.765 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.765 02:37:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.765 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.765 02:37:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.666 02:37:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 
-- # modules=(/sys/module/nvmet/holders/*) 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:09.666 02:37:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:11.041 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:21:11.041 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:21:11.041 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:21:11.041 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:21:11.041 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:21:11.041 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:21:11.041 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:21:11.041 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:21:11.041 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:21:11.041 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:21:11.041 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:21:11.041 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:21:11.041 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:21:11.041 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:21:11.041 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:21:11.041 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:21:11.975 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:21:12.233 02:37:59 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.GJg /tmp/spdk.key-null.LJE /tmp/spdk.key-sha256.kAR /tmp/spdk.key-sha384.STU /tmp/spdk.key-sha512.tyd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:21:12.233 02:37:59 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:13.606 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:13.606 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:13.606 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:13.606 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:13.606 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:13.606 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:13.606 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:13.606 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:13.606 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:13.606 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:13.606 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:13.606 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:13.606 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:13.606 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:13.606 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:13.606 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:13.606 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:13.606 00:21:13.606 real 0m48.311s 00:21:13.606 user 0m45.644s 00:21:13.606 sys 0m6.450s 00:21:13.606 02:38:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:13.606 02:38:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:13.606 ************************************ 00:21:13.606 END TEST nvmf_auth 00:21:13.606 ************************************ 00:21:13.606 02:38:00 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:21:13.606 02:38:00 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh 
--transport=tcp 00:21:13.606 02:38:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:13.606 02:38:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:13.606 02:38:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:13.606 ************************************ 00:21:13.606 START TEST nvmf_digest 00:21:13.606 ************************************ 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:13.606 * Looking for test storage... 00:21:13.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.606 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.607 02:38:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:16.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:16.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:16.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:16.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:16.136 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:16.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:16.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:21:16.137 00:21:16.137 --- 10.0.0.2 ping statistics --- 00:21:16.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.137 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:21:16.137 00:21:16.137 --- 10.0.0.1 ping statistics --- 00:21:16.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.137 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:16.137 ************************************ 00:21:16.137 START TEST nvmf_digest_clean 00:21:16.137 ************************************ 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2392252 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2392252 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2392252 ']' 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:16.137 02:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:16.393 [2024-05-15 02:38:03.555349] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:21:16.393 [2024-05-15 02:38:03.555445] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.393 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.393 [2024-05-15 02:38:03.638030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.393 [2024-05-15 02:38:03.756799] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.393 [2024-05-15 02:38:03.756873] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.393 [2024-05-15 02:38:03.756890] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.393 [2024-05-15 02:38:03.756911] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.393 [2024-05-15 02:38:03.756924] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
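Because nvmf_tgt was started with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, the target stays unconfigured until the test drives it over /var/tmp/spdk.sock; the trace only shows the outcome a few records further down (the null0 bdev, the TCP transport init, and the listener on 10.0.0.2 port 4420). The concrete calls are collapsed inside common_target_config, so the snippet below is an assumption: it uses standard SPDK RPC names together with the values that do appear in the trace (nqn.2016-06.io.spdk:cnode1, NVMF_TRANSPORT_OPTS='-t tcp -o', port 4420), and the null-bdev size/block-size arguments are placeholders:

# assumed body of common_target_config; rpc_cmd with no arguments reads a batch from stdin
rpc_cmd << CONFIG
framework_start_init
bdev_null_create null0 100 4096
nvmf_create_transport -t tcp -o
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
CONFIG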
00:21:16.393 [2024-05-15 02:38:03.756983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:17.361 null0 00:21:17.361 [2024-05-15 02:38:04.649984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.361 [2024-05-15 02:38:04.673960] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:17.361 [2024-05-15 02:38:04.674228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2392405 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2392405 /var/tmp/bperf.sock 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2392405 ']' 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:17.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:17.361 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:17.362 02:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:17.362 [2024-05-15 02:38:04.722694] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:21:17.362 [2024-05-15 02:38:04.722768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392405 ] 00:21:17.619 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.619 [2024-05-15 02:38:04.802447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.619 [2024-05-15 02:38:04.923611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.553 02:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:18.553 02:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:18.553 02:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:18.553 02:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:18.553 02:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:18.811 02:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:18.811 02:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.068 nvme0n1 00:21:19.068 02:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:19.068 02:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:19.068 Running I/O for 2 seconds... 
00:21:21.595 00:21:21.595 Latency(us) 00:21:21.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.595 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:21.595 nvme0n1 : 2.01 19640.67 76.72 0.00 0.00 6506.90 3179.71 12718.84 00:21:21.595 =================================================================================================================== 00:21:21.595 Total : 19640.67 76.72 0.00 0.00 6506.90 3179.71 12718.84 00:21:21.595 0 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:21.595 | select(.opcode=="crc32c") 00:21:21.595 | "\(.module_name) \(.executed)"' 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2392405 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2392405 ']' 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2392405 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2392405 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2392405' 00:21:21.595 killing process with pid 2392405 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2392405 00:21:21.595 Received shutdown signal, test time was about 2.000000 seconds 00:21:21.595 00:21:21.595 Latency(us) 00:21:21.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.595 =================================================================================================================== 00:21:21.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.595 02:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2392405 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:21.854 02:38:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2392943 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2392943 /var/tmp/bperf.sock 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2392943 ']' 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:21.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:21.854 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:21.854 [2024-05-15 02:38:09.064358] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:21:21.854 [2024-05-15 02:38:09.064449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392943 ] 00:21:21.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:21.854 Zero copy mechanism will not be used. 
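waitforlisten shows up in the trace above only through a few xtrace lines (rpc_addr=/var/tmp/bperf.sock, max_retries=100 and the "Waiting for process..." echo); its loop body is never expanded in this log. A hypothetical stand-in for what such a wait helper does, assumed shape only and not the actual autotest_common.sh implementation:

# Hypothetical equivalent of the waitforlisten step traced above (assumption, not the real helper).
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/bperf.sock} max_retries=100
    while (( max_retries-- > 0 )); do
        # Give up early if the process already died.
        kill -0 "$pid" 2>/dev/null || return 1
        # Consider the app ready once its RPC socket answers a trivial request.
        if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
               -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}

# Usage mirroring the trace: wait for the second bdevperf (pid 2392943 in this run).
# wait_for_rpc_socket 2392943 /var/tmp/bperf.sock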
00:21:21.854 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.854 [2024-05-15 02:38:09.140452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.854 [2024-05-15 02:38:09.252415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.112 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:22.112 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:22.112 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:22.112 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:22.112 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:22.370 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:22.370 02:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:22.936 nvme0n1 00:21:22.936 02:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:22.936 02:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:22.936 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:22.936 Zero copy mechanism will not be used. 00:21:22.936 Running I/O for 2 seconds... 
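After each 2-second run, the script pulls accel statistics over the same bperf socket and checks that CRC-32C was actually executed by the expected module (software here, since scan_dsa=false). The accel_get_stats/jq pair traced above can be reproduced on its own; a short sketch using the exact filter from this log:

# Reproduce the crc32c accounting check traced above (jq filter copied from the log).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
read -r acc_module acc_executed < <(
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

# The clean digest test expects the software module to have done real work.
if (( acc_executed > 0 )) && [[ $acc_module == software ]]; then
    echo "crc32c handled in software ($acc_executed operations)"
else
    echo "unexpected accel module: ${acc_module:-none}"
fi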
00:21:24.835 00:21:24.835 Latency(us) 00:21:24.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.835 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:24.835 nvme0n1 : 2.01 2145.93 268.24 0.00 0.00 7450.23 6699.24 11408.12 00:21:24.835 =================================================================================================================== 00:21:24.835 Total : 2145.93 268.24 0.00 0.00 7450.23 6699.24 11408.12 00:21:24.835 0 00:21:25.093 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:25.093 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:25.093 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:25.093 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:25.093 | select(.opcode=="crc32c") 00:21:25.093 | "\(.module_name) \(.executed)"' 00:21:25.093 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2392943 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2392943 ']' 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2392943 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2392943 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2392943' 00:21:25.351 killing process with pid 2392943 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2392943 00:21:25.351 Received shutdown signal, test time was about 2.000000 seconds 00:21:25.351 00:21:25.351 Latency(us) 00:21:25.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.351 =================================================================================================================== 00:21:25.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.351 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2392943 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:25.609 02:38:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2393353 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2393353 /var/tmp/bperf.sock 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2393353 ']' 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:25.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:25.609 02:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:25.609 [2024-05-15 02:38:12.862963] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:21:25.609 [2024-05-15 02:38:12.863057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393353 ] 00:21:25.609 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.609 [2024-05-15 02:38:12.935126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.868 [2024-05-15 02:38:13.051660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.868 02:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:25.868 02:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:25.868 02:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:25.868 02:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:25.868 02:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:26.126 02:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:26.126 02:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:26.383 nvme0n1 00:21:26.383 02:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:26.383 02:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:26.641 Running I/O for 2 seconds... 
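The MiB/s column in the result tables above is just IOPS times I/O size; the figures already printed can be cross-checked by hand, e.g. 19640.67 IOPS at 4096 B is about 76.72 MiB/s and 2145.93 IOPS at 131072 B is about 268.24 MiB/s. A one-liner redoing that arithmetic, for illustration only:

# Sanity-check the MiB/s columns reported above from IOPS and I/O size.
awk 'BEGIN {
    printf "randread   4 KiB: %.2f MiB/s\n", 19640.67 * 4096   / 1048576;  # ~76.72
    printf "randread 128 KiB: %.2f MiB/s\n", 2145.93  * 131072 / 1048576;  # ~268.24
}'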
00:21:28.543 00:21:28.543 Latency(us) 00:21:28.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.543 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:28.543 nvme0n1 : 2.01 18815.11 73.50 0.00 0.00 6786.07 3058.35 9951.76 00:21:28.543 =================================================================================================================== 00:21:28.543 Total : 18815.11 73.50 0.00 0.00 6786.07 3058.35 9951.76 00:21:28.543 0 00:21:28.543 02:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:28.543 02:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:28.543 02:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:28.543 02:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:28.543 02:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:28.543 | select(.opcode=="crc32c") 00:21:28.543 | "\(.module_name) \(.executed)"' 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2393353 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2393353 ']' 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2393353 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2393353 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2393353' 00:21:28.802 killing process with pid 2393353 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2393353 00:21:28.802 Received shutdown signal, test time was about 2.000000 seconds 00:21:28.802 00:21:28.802 Latency(us) 00:21:28.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.802 =================================================================================================================== 00:21:28.802 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.802 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2393353 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:29.061 02:38:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2393835 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2393835 /var/tmp/bperf.sock 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:29.061 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2393835 ']' 00:21:29.319 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:29.319 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:29.319 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:29.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:29.319 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:29.319 02:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:29.319 [2024-05-15 02:38:16.516671] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:21:29.319 [2024-05-15 02:38:16.516766] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393835 ] 00:21:29.319 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:29.319 Zero copy mechanism will not be used. 
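Each run above ends with the same killprocess/wait pair: check that the pid still exists, confirm it is not a sudo wrapper, send it a signal, then reap it (as traced for pids 2392405, 2392943 and 2393353). A condensed sketch of that teardown, following only the commands visible in the trace:

# Teardown pattern traced above: make sure the bperf pid is ours, kill it, then reap it.
stop_bperf() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 in this log
        [[ $name == sudo ]] && return 1             # the traced check refuses a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                         # works because bdevperf is a child shell job
}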
00:21:29.319 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.319 [2024-05-15 02:38:16.591053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.319 [2024-05-15 02:38:16.706583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.252 02:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:30.252 02:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:30.252 02:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:30.252 02:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:30.252 02:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:30.510 02:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:30.510 02:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:30.768 nvme0n1 00:21:30.768 02:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:30.768 02:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:31.026 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:31.026 Zero copy mechanism will not be used. 00:21:31.026 Running I/O for 2 seconds... 
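The nvmf side of these runs was configured once at the top of nvmf_digest_clean through common_target_config (the null0 bdev, the TCP transport init and the 10.0.0.2:4420 listener), and nvmf_digest_error below repeats the same step. The rpc_cmd payload itself is not expanded by xtrace in this log; a plausible equivalent sequence is sketched here, where the RPC names are standard SPDK RPCs but the bdev size and serial number are invented for illustration:

# Assumed reconstruction of the target-side config behind common_target_config
# (sizes/serial below are illustrative, not taken from this log).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"

$rpc nvmf_create_transport -t tcp                              # "*** TCP Transport Init ***"
$rpc bdev_null_create null0 100 4096                           # the null0 bdev seen above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420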
00:21:32.955 00:21:32.955 Latency(us) 00:21:32.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.955 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:32.955 nvme0n1 : 2.01 1440.70 180.09 0.00 0.00 11068.89 8204.14 19709.35 00:21:32.955 =================================================================================================================== 00:21:32.955 Total : 1440.70 180.09 0.00 0.00 11068.89 8204.14 19709.35 00:21:32.955 0 00:21:32.955 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:32.955 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:32.955 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:32.955 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:32.955 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:32.955 | select(.opcode=="crc32c") 00:21:32.955 | "\(.module_name) \(.executed)"' 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2393835 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2393835 ']' 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2393835 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2393835 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2393835' 00:21:33.214 killing process with pid 2393835 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2393835 00:21:33.214 Received shutdown signal, test time was about 2.000000 seconds 00:21:33.214 00:21:33.214 Latency(us) 00:21:33.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.214 =================================================================================================================== 00:21:33.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.214 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2393835 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2392252 00:21:33.472 02:38:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2392252 ']' 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2392252 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2392252 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2392252' 00:21:33.472 killing process with pid 2392252 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2392252 00:21:33.472 [2024-05-15 02:38:20.811073] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:33.472 02:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2392252 00:21:33.730 00:21:33.730 real 0m17.575s 00:21:33.730 user 0m35.087s 00:21:33.730 sys 0m3.920s 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:33.730 ************************************ 00:21:33.730 END TEST nvmf_digest_clean 00:21:33.730 ************************************ 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:33.730 ************************************ 00:21:33.730 START TEST nvmf_digest_error 00:21:33.730 ************************************ 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2394448 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2394448 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
2394448 ']' 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:33.730 02:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.988 [2024-05-15 02:38:21.182811] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:21:33.988 [2024-05-15 02:38:21.182896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.988 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.988 [2024-05-15 02:38:21.262706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.988 [2024-05-15 02:38:21.376730] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.988 [2024-05-15 02:38:21.376803] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.988 [2024-05-15 02:38:21.376819] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.988 [2024-05-15 02:38:21.376832] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.988 [2024-05-15 02:38:21.376843] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
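The nvmf_digest_error target above is launched inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc so that crc32c can be re-assigned to the error accel module before the framework comes up (the accel_assign_opc notice follows right after). A minimal sketch of that startup using the command line from this trace; the explicit framework_start_init call is an assumption, since the trace only implies it:

# Start the digest_error target as traced above: all tracepoint groups on, framework init deferred.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# After the RPC socket is up (waitforlisten above), route crc32c to the error module,
# then let the framework finish initializing (implied by --wait-for-rpc, not shown verbatim).
$spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
$spdk/scripts/rpc.py framework_start_init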
00:21:33.988 [2024-05-15 02:38:21.376882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.921 [2024-05-15 02:38:22.183419] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.921 null0 00:21:34.921 [2024-05-15 02:38:22.304430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.921 [2024-05-15 02:38:22.328400] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:34.921 [2024-05-15 02:38:22.328674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:34.921 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2394603 00:21:34.922 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:34.922 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2394603 /var/tmp/bperf.sock 00:21:34.922 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2394603 ']' 00:21:34.922 
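The digest-error pass that follows drives the corruption from the target side: with crc32c routed to the error module, accel_error_inject_error is first switched off so the --ddgst attach succeeds cleanly, then set to corrupt crc32c results (with -i 256, as in the trace) before perform_tests. The data digests the initiator verifies on its reads then stop matching, which is why the completions further down all report "data digest error" as COMMAND TRANSIENT TRANSPORT ERROR. A sketch of the RPC calls involved, as they appear below in this trace:

# Error-injection sequence used around the run below (commands as they appear in this trace).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"

$rpc accel_error_inject_error -o crc32c -t disable            # no injection during attach
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256     # inject corrupt crc32c results
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests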
02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:34.922 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:34.922 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:34.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:34.922 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:34.922 02:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.180 [2024-05-15 02:38:22.372960] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:21:35.180 [2024-05-15 02:38:22.373051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394603 ] 00:21:35.180 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.180 [2024-05-15 02:38:22.448435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.180 [2024-05-15 02:38:22.564679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.113 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:36.113 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:36.113 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:36.113 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:36.371 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:36.371 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.371 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:36.371 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.371 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.371 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.629 nvme0n1 00:21:36.629 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:36.629 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.629 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:36.629 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.629 02:38:23 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:36.629 02:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:36.888 Running I/O for 2 seconds... 00:21:36.888 [2024-05-15 02:38:24.099761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.099811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.099831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.113478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.113510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.113527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.126201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.126235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.126253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.139259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.139290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.139318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.152951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.152982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.153000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.164115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.164146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.164164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.178067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.178098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.178115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.189301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.189331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.189348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.201683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.201727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.201743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.214661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.214691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.214708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.227399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.227428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.227444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.240167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.240198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.240216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.252746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.252781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.252799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.265049] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.265080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.265097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.277373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.277404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.277421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.888 [2024-05-15 02:38:24.290435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:36.888 [2024-05-15 02:38:24.290465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.888 [2024-05-15 02:38:24.290482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.147 [2024-05-15 02:38:24.302519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.147 [2024-05-15 02:38:24.302553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.147 [2024-05-15 02:38:24.302571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.147 [2024-05-15 02:38:24.314687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.147 [2024-05-15 02:38:24.314718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.147 [2024-05-15 02:38:24.314734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.147 [2024-05-15 02:38:24.328437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.147 [2024-05-15 02:38:24.328468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.147 [2024-05-15 02:38:24.328484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.147 [2024-05-15 02:38:24.341018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.147 [2024-05-15 02:38:24.341050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.147 [2024-05-15 02:38:24.341068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:37.147 [2024-05-15 02:38:24.352354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.147 [2024-05-15 02:38:24.352385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.147 [2024-05-15 02:38:24.352402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.147 [2024-05-15 02:38:24.366856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.147 [2024-05-15 02:38:24.366901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.147 [2024-05-15 02:38:24.366919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.147 [2024-05-15 02:38:24.377665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.147 [2024-05-15 02:38:24.377695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.147 [2024-05-15 02:38:24.377712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.391417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.391448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.391465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.404116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.404148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.404166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.416342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.416372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.416389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.429336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.429367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.429388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.440766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.440797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.440814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.453976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.454011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.454029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.465504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.465533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.465556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.478563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.478593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.478609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.491420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.491448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.491464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.504904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.504957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.504975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.516290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.516334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.516352] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.528384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.528412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.528428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.541674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.541703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.541719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.148 [2024-05-15 02:38:24.553734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.148 [2024-05-15 02:38:24.553765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.148 [2024-05-15 02:38:24.553781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.568729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.568762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.568779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.579724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.579761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.579778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.593798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.593828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.593845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.604890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.604944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.604964] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.618481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.618516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.618534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.630419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.630450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.630467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.644029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.644067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.644085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.654657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.654689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.654706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.669114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.669161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.669179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.682077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.682109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.682126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.693740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.693769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:37.407 [2024-05-15 02:38:24.693786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.706198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.706229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-05-15 02:38:24.706261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.407 [2024-05-15 02:38:24.719056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.407 [2024-05-15 02:38:24.719087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-05-15 02:38:24.719105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.408 [2024-05-15 02:38:24.731755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.408 [2024-05-15 02:38:24.731786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-05-15 02:38:24.731802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.408 [2024-05-15 02:38:24.743903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.408 [2024-05-15 02:38:24.743958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-05-15 02:38:24.743976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.408 [2024-05-15 02:38:24.756630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.408 [2024-05-15 02:38:24.756659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-05-15 02:38:24.756675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.408 [2024-05-15 02:38:24.770209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.408 [2024-05-15 02:38:24.770255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-05-15 02:38:24.770272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.408 [2024-05-15 02:38:24.781086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.408 [2024-05-15 02:38:24.781118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2234 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-05-15 02:38:24.781135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.408 [2024-05-15 02:38:24.794299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.408 [2024-05-15 02:38:24.794329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-05-15 02:38:24.794352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.408 [2024-05-15 02:38:24.806662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.408 [2024-05-15 02:38:24.806692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-05-15 02:38:24.806723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.408 [2024-05-15 02:38:24.819891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.408 [2024-05-15 02:38:24.819924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-05-15 02:38:24.819953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.833594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.833631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.833651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.846584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.846619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.846638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.860168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.860214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.860230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.874909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.874952] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.874988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.886617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.886650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.886669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.900325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.900359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.900378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.914427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.914461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.914481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.928122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.928155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.928173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.942070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.942100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.942117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.955103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.955133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.955151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.970578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.970612] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.970631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.982769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.982804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.982823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:24.998012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:24.998046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:24.998064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:25.010260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:25.010293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:25.010312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:25.025776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:25.025811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:25.025836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:25.037951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:25.037997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:25.038014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:25.052044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:25.052074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:25.052092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:25.065144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:25.065189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:25.065207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.667 [2024-05-15 02:38:25.079162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.667 [2024-05-15 02:38:25.079195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.667 [2024-05-15 02:38:25.079228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.925 [2024-05-15 02:38:25.094674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.094712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.094732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.106397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.106433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.106453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.121195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.121246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.121265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.132483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.132517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.132537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.148127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.148164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.148182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.159941] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.159999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.160020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.174211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.174259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.174278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.188685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.188719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.188738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.202516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.202549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.202569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.215158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.215202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.215220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.229201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.229251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.229270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.242533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.242568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.242588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:37.926 [2024-05-15 02:38:25.257054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.257086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.257102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.270625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.270664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.270684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.284410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.284443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.284462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.295541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.295575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.295594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.309823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.309858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.309877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.324299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.324347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.324366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.926 [2024-05-15 02:38:25.337753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:37.926 [2024-05-15 02:38:25.337790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.926 [2024-05-15 02:38:25.337821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.352091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.352124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.352143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.363947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.363995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.364013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.379504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.379539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.379564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.392185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.392216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.392252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.406547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.406582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.406606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.419834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.419869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.419888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.433276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.433311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.433331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.446853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.446887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.446906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.461117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.461149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.461166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.472220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.472270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.472291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.487730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.487766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.487786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.502555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.502595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.502615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.515791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.515827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.515846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.528900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.528943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 
[2024-05-15 02:38:25.528979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.541198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.541246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.541265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.555861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.555897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.555916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.185 [2024-05-15 02:38:25.569406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.185 [2024-05-15 02:38:25.569441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.185 [2024-05-15 02:38:25.569460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.186 [2024-05-15 02:38:25.582342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.186 [2024-05-15 02:38:25.582377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.186 [2024-05-15 02:38:25.582396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.186 [2024-05-15 02:38:25.595783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.186 [2024-05-15 02:38:25.595820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.186 [2024-05-15 02:38:25.595840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.609959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.610015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.610049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.623642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.623676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6620 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.623695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.637923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.637966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.638010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.650958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.651010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.651030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.663928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.663983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.664000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.676939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.676986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.677002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.692372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.692407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.692426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.705119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.705150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.705171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.717693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.717727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:112 nsid:1 lba:12468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.717745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.731627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.731662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.731688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.744599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.744633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.744653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.758857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.758891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.758911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.773137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.773167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.773185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.785012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.785043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.785059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.799638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.799673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.799692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.813493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.813528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.444 [2024-05-15 02:38:25.813547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.444 [2024-05-15 02:38:25.826807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.444 [2024-05-15 02:38:25.826841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.445 [2024-05-15 02:38:25.826859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.445 [2024-05-15 02:38:25.840327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.445 [2024-05-15 02:38:25.840361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.445 [2024-05-15 02:38:25.840380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.445 [2024-05-15 02:38:25.855061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.445 [2024-05-15 02:38:25.855095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.445 [2024-05-15 02:38:25.855113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.867898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:25.867950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.867969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.883660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:25.883695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.883714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.895156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:25.895201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.895217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.910160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 
[2024-05-15 02:38:25.910190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.910207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.923514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:25.923548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.923567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.937491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:25.937524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.937543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.949776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:25.949809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.949828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.965517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:25.965552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.965577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.979096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:25.979126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.979142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:25.993065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:25.993096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:25.993113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:26.005743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:26.005778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:26.005798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:26.020139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:26.020169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:26.020186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:26.032073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:26.032102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:26.032118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:26.047850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.703 [2024-05-15 02:38:26.047883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.703 [2024-05-15 02:38:26.047902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.703 [2024-05-15 02:38:26.061500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.704 [2024-05-15 02:38:26.061534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.704 [2024-05-15 02:38:26.061553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.704 [2024-05-15 02:38:26.074263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.704 [2024-05-15 02:38:26.074296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.704 [2024-05-15 02:38:26.074315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.704 [2024-05-15 02:38:26.087505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb3950) 00:21:38.704 [2024-05-15 02:38:26.087544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.704 [2024-05-15 02:38:26.087564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.704 00:21:38.704 Latency(us) 00:21:38.704 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:38.704 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:38.704 nvme0n1 : 2.00 19242.17 75.16 0.00 0.00 6642.75 2949.12 18738.44
00:21:38.704 ===================================================================================================================
00:21:38.704 Total : 19242.17 75.16 0.00 0.00 6642.75 2949.12 18738.44
00:21:38.704 0
00:21:38.704 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:38.704 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:38.704 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:38.704 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:38.704 | .driver_specific
00:21:38.704 | .nvme_error
00:21:38.704 | .status_code
00:21:38.704 | .command_transient_transport_error'
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 ))
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2394603
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2394603 ']'
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2394603
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2394603
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2394603'
00:21:38.962 killing process with pid 2394603
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2394603
00:21:38.962 Received shutdown signal, test time was about 2.000000 seconds
00:21:38.962
00:21:38.962 Latency(us)
00:21:38.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:38.962 ===================================================================================================================
00:21:38.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:38.962 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2394603
00:21:39.220 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:21:39.220 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
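
The transient-error check traced above reduces to one RPC query plus a jq projection over its output. The following is a minimal sketch of that helper, reconstructed from the xtrace rather than taken verbatim from digest.sh; the rpc.py path and the /var/tmp/bperf.sock socket are the ones used in this run and may differ elsewhere.

#!/usr/bin/env bash
# Sketch of get_transient_errcount as exercised above: ask the bdevperf
# application (over the bperf RPC socket) for per-bdev I/O statistics and
# pull out the transient transport error counter, available here because
# bdev_nvme_set_options was called with --nvme-error-stat earlier in the run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

get_transient_errcount() {
    # Same jq path the trace shows being applied to bdev_get_iostat output.
    "$rpc" -s "$sock" bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# The test only asserts that the counter is non-zero after crc32c errors were
# injected with accel_error_inject_error (the "(( 151 > 0 ))" check above).
(( $(get_transient_errcount nvme0n1) > 0 ))
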
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2395034
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2395034 /var/tmp/bperf.sock
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2395034 ']'
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:39.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:21:39.478 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:39.478 [2024-05-15 02:38:26.676423] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization...
00:21:39.478 [2024-05-15 02:38:26.676492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395034 ]
00:21:39.479 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:39.479 Zero copy mechanism will not be used.
00:21:39.479 EAL: No free 2048 kB hugepages reported on node 1
00:21:39.479 [2024-05-15 02:38:26.750286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:39.479 [2024-05-15 02:38:26.866407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:39.736 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:39.736 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:21:39.736 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:39.736 02:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:39.994 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:39.994 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:39.994 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:39.994 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:39.994 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:39.994 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller
--ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.252 nvme0n1 00:21:40.252 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:40.252 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.252 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:40.252 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.252 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:40.252 02:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:40.513 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:40.513 Zero copy mechanism will not be used. 00:21:40.513 Running I/O for 2 seconds... 00:21:40.513 [2024-05-15 02:38:27.704668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.704741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.704766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.724556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.724592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.724612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.747159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.747196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.747217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.765674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.765716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.765744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.777185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.777216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.777261] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.797480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.797517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.797537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.814921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.814984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.815004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.836323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.836359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.836382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.849386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.849421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.849440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.868945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.868979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.869013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.884357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.884392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.884411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.901709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.901745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.901766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.513 [2024-05-15 02:38:27.919431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.513 [2024-05-15 02:38:27.919466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.513 [2024-05-15 02:38:27.919486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:27.929545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:27.929582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:27.929602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:27.947137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:27.947171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:27.947190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:27.966037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:27.966069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:27.966088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:27.985022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:27.985054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:27.985071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.004196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.004241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.004268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.022030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.022063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:40.773 [2024-05-15 02:38:28.022081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.040151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.040182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.040199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.056177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.056209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.056227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.073845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.073881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.073901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.091882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.091917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.091947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.110180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.110212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.110230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.127675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.127712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.127732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.145156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.145187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.145207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.162152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.162183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.162222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.773 [2024-05-15 02:38:28.175403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:40.773 [2024-05-15 02:38:28.175438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.773 [2024-05-15 02:38:28.175458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.193428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.193465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.193485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.210503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.210539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.210559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.229145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.229177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.229196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.246604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.246639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.246659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.264112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.264144] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.264162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.281200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.281250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.281268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.298176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.298221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.298258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.315188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.315235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.315254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.334039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.334072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.334090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.344466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.344501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.344520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.360626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.360661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.360681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.378309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.378345] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.378365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.395949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.395998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.396016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.412951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.412997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.413013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.032 [2024-05-15 02:38:28.432025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.032 [2024-05-15 02:38:28.432055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.032 [2024-05-15 02:38:28.432072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.451638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.451681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.451702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.469717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.469752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.469772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.481092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.481122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.481139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.498399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.498434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.498454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.515706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.515741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.515761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.533547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.533582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.533601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.551448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.551483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.551503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.568337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.568367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.568403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.585547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.585583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.585603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.603596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.603632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.603652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.621525] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.291 [2024-05-15 02:38:28.621561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.291 [2024-05-15 02:38:28.621580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.291 [2024-05-15 02:38:28.638853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.292 [2024-05-15 02:38:28.638889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.292 [2024-05-15 02:38:28.638908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.292 [2024-05-15 02:38:28.656604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.292 [2024-05-15 02:38:28.656639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.292 [2024-05-15 02:38:28.656659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.292 [2024-05-15 02:38:28.675374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.292 [2024-05-15 02:38:28.675409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.292 [2024-05-15 02:38:28.675429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.292 [2024-05-15 02:38:28.692760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.292 [2024-05-15 02:38:28.692796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.292 [2024-05-15 02:38:28.692815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.711404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.711443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.711464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.729137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.729185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.729203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:41.550 [2024-05-15 02:38:28.747217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.747249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.747293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.763560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.763596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.763615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.780914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.780958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.780991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.799408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.799444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.799463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.816234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.816281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.816301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.835561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.835598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.835618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.853788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.853824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.853844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.870787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.870823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.870843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.889058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.889090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.889109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.906815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.906856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.906877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.920737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.550 [2024-05-15 02:38:28.920770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.550 [2024-05-15 02:38:28.920790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.550 [2024-05-15 02:38:28.935739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.551 [2024-05-15 02:38:28.935775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.551 [2024-05-15 02:38:28.935794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.551 [2024-05-15 02:38:28.953133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.551 [2024-05-15 02:38:28.953164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.551 [2024-05-15 02:38:28.953180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:28.971761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:28.971798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:28.971819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:28.989578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:28.989615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:28.989635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.007466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.007503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.007522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.025523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.025559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.025579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.043369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.043405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.043424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.061176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.061221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.061238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.078982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.079013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.079046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.096525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.096559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:41.809 [2024-05-15 02:38:29.096579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.113871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.113907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.113927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.131254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.131301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.131317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.148563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.148600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.148619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.166118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.166149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.166166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.183664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.183700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.183720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.201522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.201557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.201584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.809 [2024-05-15 02:38:29.218497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:41.809 [2024-05-15 02:38:29.218532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.809 [2024-05-15 02:38:29.218552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.236649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.236686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.236705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.254668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.254704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.254725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.272737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.272772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.272792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.290048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.290079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.290096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.308060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.308095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.308112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.325123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.325169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.325186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.343838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.343875] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.343895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.361131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.361163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.361182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.376168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.376202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.376220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.393206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.393255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.393273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.409924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.409962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.409981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.425631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.425661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.425678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.068 [2024-05-15 02:38:29.441848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.068 [2024-05-15 02:38:29.441879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.068 [2024-05-15 02:38:29.441897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.069 [2024-05-15 02:38:29.452208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.069 [2024-05-15 02:38:29.452254] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.069 [2024-05-15 02:38:29.452272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.069 [2024-05-15 02:38:29.467105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.069 [2024-05-15 02:38:29.467137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.069 [2024-05-15 02:38:29.467155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.327 [2024-05-15 02:38:29.484059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.327 [2024-05-15 02:38:29.484110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.327 [2024-05-15 02:38:29.484136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.327 [2024-05-15 02:38:29.500396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.327 [2024-05-15 02:38:29.500430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.327 [2024-05-15 02:38:29.500448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.327 [2024-05-15 02:38:29.517783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.327 [2024-05-15 02:38:29.517814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.327 [2024-05-15 02:38:29.517846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.327 [2024-05-15 02:38:29.534313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.327 [2024-05-15 02:38:29.534346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.327 [2024-05-15 02:38:29.534381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.327 [2024-05-15 02:38:29.550303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.327 [2024-05-15 02:38:29.550332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.327 [2024-05-15 02:38:29.550349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.327 [2024-05-15 02:38:29.565615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 
00:21:42.327 [2024-05-15 02:38:29.565653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.327 [2024-05-15 02:38:29.565675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.327 [2024-05-15 02:38:29.581620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.327 [2024-05-15 02:38:29.581650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.327 [2024-05-15 02:38:29.581667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.328 [2024-05-15 02:38:29.597800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.328 [2024-05-15 02:38:29.597832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.328 [2024-05-15 02:38:29.597850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.328 [2024-05-15 02:38:29.612022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.328 [2024-05-15 02:38:29.612053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.328 [2024-05-15 02:38:29.612071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.328 [2024-05-15 02:38:29.624015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.328 [2024-05-15 02:38:29.624053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.328 [2024-05-15 02:38:29.624071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.328 [2024-05-15 02:38:29.638804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.328 [2024-05-15 02:38:29.638834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.328 [2024-05-15 02:38:29.638850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.328 [2024-05-15 02:38:29.655417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.328 [2024-05-15 02:38:29.655448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.328 [2024-05-15 02:38:29.655480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.328 [2024-05-15 02:38:29.671003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.328 [2024-05-15 02:38:29.671035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.328 [2024-05-15 02:38:29.671052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.328 [2024-05-15 02:38:29.687564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23f4850) 00:21:42.328 [2024-05-15 02:38:29.687596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.328 [2024-05-15 02:38:29.687627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.328 00:21:42.328 Latency(us) 00:21:42.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.328 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:42.328 nvme0n1 : 2.01 1820.57 227.57 0.00 0.00 8784.13 2669.99 23398.78 00:21:42.328 =================================================================================================================== 00:21:42.328 Total : 1820.57 227.57 0.00 0.00 8784.13 2669.99 23398.78 00:21:42.328 0 00:21:42.328 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:42.328 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:42.328 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:42.328 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:42.328 | .driver_specific 00:21:42.328 | .nvme_error 00:21:42.328 | .status_code 00:21:42.328 | .command_transient_transport_error' 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 )) 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2395034 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2395034 ']' 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2395034 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2395034 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2395034' 00:21:42.586 killing process with pid 2395034 00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2395034 00:21:42.586 Received shutdown signal, test time was about 2.000000 seconds 00:21:42.586 
00:21:42.586 Latency(us)
00:21:42.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:42.586 ===================================================================================================================
00:21:42.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:42.586 02:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2395034
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2395546
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2395546 /var/tmp/bperf.sock
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2395546 ']'
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:43.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:21:43.152 02:38:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:43.152 [2024-05-15 02:38:30.309585] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization...
00:21:43.152 [2024-05-15 02:38:30.309680] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395546 ]
00:21:43.152 EAL: No free 2048 kB hugepages reported on node 1
00:21:43.152 [2024-05-15 02:38:30.383643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:43.152 [2024-05-15 02:38:30.497658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:44.085 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:44.085 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:21:44.085 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:44.085 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:44.343 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:44.343 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:44.343 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:44.343 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:44.343 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:44.343 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:44.602 nvme0n1
00:21:44.602 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:21:44.602 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:44.602 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:44.602 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:44.602 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:44.602 02:38:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:44.602 Running I/O for 2 seconds...
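Editor's note: the xtrace above is the whole recipe for the write-side digest-error case: bdevperf is started in RPC-wait mode, per-status-code error counting and unlimited retries are enabled, the controller is attached with data digest (--ddgst) turned on, and the accel framework is told to corrupt every 256th crc32c so digests mismatch on purpose. Below is a condensed, hedged shell sketch of that sequence, not a drop-in replacement for host/digest.sh: the paths, the 10.0.0.2 target address and the -i 256 interval are the values from this run, the bperf_rpc/rpc_cmd wrappers are replaced by direct rpc.py calls, and sending accel_error_inject_error to the nvmf target's default RPC socket (rather than bperf.sock) is an assumption inferred from the trace using rpc_cmd for that step.

  # Sketch only -- condensed from the xtrace above under the assumptions stated in the note.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # 1. Start bdevperf idle (-z) so it waits for RPC configuration: 4 KiB random writes, QD 128, 2 s run.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

  # 2. Keep NVMe error statistics per status code and retry indefinitely instead of failing the job.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 3. Attach the target with data digest enabled; corrupted digests then complete as
  #    COMMAND TRANSIENT TRANSPORT ERROR (00/22), exactly what the log lines below show.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 4. Corrupt every 256th crc32c result in the accel framework (assumed to go to the nvmf
  #    target's default RPC socket, since the trace issues this through rpc_cmd).
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

  # 5. Run the workload, then pull the transient-transport-error counter back out of iostat.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The final jq expression is the same filter host/digest.sh@28 applied earlier in this log for the randread case, where the resulting count fed the (( 117 > 0 )) assertion.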
00:21:44.602 [2024-05-15 02:38:32.000799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.602 [2024-05-15 02:38:32.001200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.602 [2024-05-15 02:38:32.001269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-05-15 02:38:32.015480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.602 [2024-05-15 02:38:32.015882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.602 [2024-05-15 02:38:32.015950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.863 [2024-05-15 02:38:32.030178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.863 [2024-05-15 02:38:32.030528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.863 [2024-05-15 02:38:32.030565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.863 [2024-05-15 02:38:32.044550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.863 [2024-05-15 02:38:32.044960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.863 [2024-05-15 02:38:32.045024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.863 [2024-05-15 02:38:32.058886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.059263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.059316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.073180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.073585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.073628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.087422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.087779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.087816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:44.864 [2024-05-15 02:38:32.101389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.101715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.101767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.115576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.115967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.116003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.129939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.130286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.130343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.144239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.144597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.144640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.158380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.158762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.158816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.172672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.173020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.173073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.186876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.187270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.187325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.201001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.201332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.201385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.215115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.215476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.215532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.229246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.229588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.229640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.243276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.243627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.243682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.257388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.257769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.257824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.864 [2024-05-15 02:38:32.271758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:44.864 [2024-05-15 02:38:32.272146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.864 [2024-05-15 02:38:32.272183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.285908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.286290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.286359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.300557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.300951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.301006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.314671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.315072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.315116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.328766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.329165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.329202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.343230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.343579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.343615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.357257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.357567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.357602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.371160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.371520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.371556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.385277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.385633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.385667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.399498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.399849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.399884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.413531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.413907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.413953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.427813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.125 [2024-05-15 02:38:32.428198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.125 [2024-05-15 02:38:32.428235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.125 [2024-05-15 02:38:32.441869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.126 [2024-05-15 02:38:32.442244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.126 [2024-05-15 02:38:32.442300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.126 [2024-05-15 02:38:32.455785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.126 [2024-05-15 02:38:32.456139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.126 [2024-05-15 02:38:32.456175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.126 [2024-05-15 02:38:32.469864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.126 [2024-05-15 02:38:32.470228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.126 [2024-05-15 02:38:32.470277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.126 [2024-05-15 02:38:32.483960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.126 [2024-05-15 02:38:32.484339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.126 [2024-05-15 02:38:32.484373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.126 [2024-05-15 02:38:32.498066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.126 [2024-05-15 02:38:32.498394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.126 [2024-05-15 02:38:32.498443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.126 [2024-05-15 02:38:32.512184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.126 [2024-05-15 02:38:32.512536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.126 [2024-05-15 02:38:32.512571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.126 [2024-05-15 02:38:32.526666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.126 [2024-05-15 02:38:32.527037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.126 [2024-05-15 02:38:32.527074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.540766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.541166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.541205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.554772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.555143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.555180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.569012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.569342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.569370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.583139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.583484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.583534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.597203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.597598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.597634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.611253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.611591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.611641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.625423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.625773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.625801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.639463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.639838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.639872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.653604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.654003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.654032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.667637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.668013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.668064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.681806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.682197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.682248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.696114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.696500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.696536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.710356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.710731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.710766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.724588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.724982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.725018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.739100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.739472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.739523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.753916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.754282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.754317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.768111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.768468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.768502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.782284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.782612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.386 [2024-05-15 02:38:32.782661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.386 [2024-05-15 02:38:32.796408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.386 [2024-05-15 02:38:32.796782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.387 [2024-05-15 02:38:32.796832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.645 [2024-05-15 02:38:32.810760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.645 [2024-05-15 02:38:32.811098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.645 [2024-05-15 02:38:32.811155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.645 [2024-05-15 02:38:32.824865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.645 [2024-05-15 02:38:32.825246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.645 [2024-05-15 02:38:32.825276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.645 [2024-05-15 02:38:32.838943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.839281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.839316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.853038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.853379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.853413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.867200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.867538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.867572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.881377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.881756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.881792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.895498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.895823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.895859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.909596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.909960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.909994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.923630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.923958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.924008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.937820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.938239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.938290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.952006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.952373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.952423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.966260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.966621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.966656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.980657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.981038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.981076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:32.994839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:32.995222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:32.995258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:33.008903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:33.009301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:33.009335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:33.023099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:33.023428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:33.023463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:33.037180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:33.037543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:33.037579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.646 [2024-05-15 02:38:33.051167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.646 [2024-05-15 02:38:33.051538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.646 [2024-05-15 02:38:33.051572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.065545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.065941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.065994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.079697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.080052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.080087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.093775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.094156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.094193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.107877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.108236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.108271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.121982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.122310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.122360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.136070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.136421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.136471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.150183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.150561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.150597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.164319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.164655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.164704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.178340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.178701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.178731] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.192549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.192947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.905 [2024-05-15 02:38:33.193001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.905 [2024-05-15 02:38:33.206766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.905 [2024-05-15 02:38:33.207117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.906 [2024-05-15 02:38:33.207154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.906 [2024-05-15 02:38:33.221134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.906 [2024-05-15 02:38:33.221482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.906 [2024-05-15 02:38:33.221522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.906 [2024-05-15 02:38:33.235272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.906 [2024-05-15 02:38:33.235663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.906 [2024-05-15 02:38:33.235699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.906 [2024-05-15 02:38:33.249498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.906 [2024-05-15 02:38:33.249858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.906 [2024-05-15 02:38:33.249894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.906 [2024-05-15 02:38:33.263623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.906 [2024-05-15 02:38:33.264026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.906 [2024-05-15 02:38:33.264063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.906 [2024-05-15 02:38:33.277923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.906 [2024-05-15 02:38:33.278320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.906 [2024-05-15 02:38:33.278355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.906 [2024-05-15 02:38:33.292252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.906 [2024-05-15 02:38:33.292654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.906 [2024-05-15 02:38:33.292695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.906 [2024-05-15 02:38:33.306356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:45.906 [2024-05-15 02:38:33.306713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.906 [2024-05-15 02:38:33.306775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.320652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.321017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.321053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.334959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.335316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.335368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.348899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.349305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.349346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.363015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.363371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.363406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.377129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.377483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.377519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.390894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.391262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.391315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.404996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.405329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.405365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.419175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.419526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.419581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.433374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.433754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.433805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.447656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.448046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.448098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.461449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.461831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.461878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.475665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.476062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 
02:38:33.476111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.489980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.490385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.490427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.504313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.504659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.504710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.518538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.518925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.518990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.532728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.533117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.533154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.546871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.547244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.547279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.561004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.561355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 [2024-05-15 02:38:33.561411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.165 [2024-05-15 02:38:33.575137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.165 [2024-05-15 02:38:33.575490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.165 
[2024-05-15 02:38:33.575541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.589388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.589793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.589851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.603618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.604025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.604079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.617890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.618256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.618293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.632002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.632388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.632441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.646012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.646378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.646414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.660320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.660696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.660737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.674462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.674815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.426 [2024-05-15 02:38:33.674863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.688701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.689062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.689115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.702831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.703188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.703240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.716889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.717260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.717297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.731068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.731401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.731447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.745335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.745695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.745731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.426 [2024-05-15 02:38:33.759524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.426 [2024-05-15 02:38:33.759912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.426 [2024-05-15 02:38:33.759978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.427 [2024-05-15 02:38:33.773588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.427 [2024-05-15 02:38:33.773983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6489 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:46.427 [2024-05-15 02:38:33.774019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.427 [2024-05-15 02:38:33.787755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.427 [2024-05-15 02:38:33.788173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.427 [2024-05-15 02:38:33.788206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.427 [2024-05-15 02:38:33.801966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.427 [2024-05-15 02:38:33.802333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.427 [2024-05-15 02:38:33.802382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.427 [2024-05-15 02:38:33.816110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.427 [2024-05-15 02:38:33.816508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.427 [2024-05-15 02:38:33.816549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.427 [2024-05-15 02:38:33.830260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.427 [2024-05-15 02:38:33.830618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.427 [2024-05-15 02:38:33.830653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.844507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.844856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.844901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.858507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.858866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.858902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.872553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.872954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4411 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.872987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.886769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.887150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.887186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.901043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.901394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.901444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.915363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.915723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.915764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.929437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.929838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.929879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.943681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.944083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.944133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.957950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.958366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.958407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.972227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.972589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13575 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.972629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-05-15 02:38:33.986376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3190) with pdu=0x2000190feb58 00:21:46.685 [2024-05-15 02:38:33.986748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-05-15 02:38:33.986781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 00:21:46.685 Latency(us) 00:21:46.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.685 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:46.685 nvme0n1 : 2.01 17915.20 69.98 0.00 0.00 7123.13 6262.33 19418.07 00:21:46.685 =================================================================================================================== 00:21:46.685 Total : 17915.20 69.98 0.00 0.00 7123.13 6262.33 19418.07 00:21:46.685 0 00:21:46.685 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:46.685 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:46.685 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:46.685 | .driver_specific 00:21:46.685 | .nvme_error 00:21:46.685 | .status_code 00:21:46.685 | .command_transient_transport_error' 00:21:46.685 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:46.945 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:21:46.945 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2395546 00:21:46.945 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2395546 ']' 00:21:46.945 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2395546 00:21:46.946 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:46.946 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:46.946 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2395546 00:21:46.946 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:46.946 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:46.946 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2395546' 00:21:46.946 killing process with pid 2395546 00:21:46.946 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2395546 00:21:46.946 Received shutdown signal, test time was about 2.000000 seconds 00:21:46.946 00:21:46.946 Latency(us) 00:21:46.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.946 
=================================================================================================================== 00:21:46.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.946 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2395546 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2395972 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2395972 /var/tmp/bperf.sock 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2395972 ']' 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:47.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:47.204 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:47.204 [2024-05-15 02:38:34.589986] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:21:47.204 [2024-05-15 02:38:34.590077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395972 ] 00:21:47.204 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:47.204 Zero copy mechanism will not be used. 
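The (( 141 > 0 )) check that closed the previous pass is the pass criterion: at least one of the injected digest failures must have been recorded for nvme0n1 as a COMMAND TRANSIENT TRANSPORT ERROR in the bdev's NVMe error statistics. A standalone sketch of that read-back step, using the same RPC socket, bdev name and jq filter that appear in the trace above (only the variable name and the echo message are illustrative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # read per-bdev I/O statistics over the bdevperf RPC socket and pull out the
  # transient transport error counter that digest.sh compares against zero
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # the test only requires that at least one such error was observed
  (( errcount > 0 )) && echo "data digest errors were reported: $errcount"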
00:21:47.462 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.462 [2024-05-15 02:38:34.672870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.462 [2024-05-15 02:38:34.781303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.721 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:47.721 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:47.721 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:47.721 02:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:47.721 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:47.721 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.721 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:47.721 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.721 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:47.721 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.291 nvme0n1 00:21:48.291 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:48.291 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.291 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:48.291 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.291 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:48.291 02:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.291 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:48.291 Zero copy mechanism will not be used. 00:21:48.291 Running I/O for 2 seconds... 
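Before this second (131072-byte, QD16) pass starts issuing I/O, the harness has repeated the same preparation visible in the trace above: NVMe error statistics are enabled on the bdevperf side, any previous crc32c fault is cleared, the controller is attached with the data digest (--ddgst) flag, and crc32c corruption is re-armed with an interval of 32. A condensed sketch of that RPC sequence follows; the commands and flags are the ones shown above, the variable names are illustrative, and the two accel_error_inject_error calls are assumed to go to the target application's default RPC socket (they are issued via rpc_cmd rather than the bperf socket), so they are shown here without -s:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # enable NVMe error statistics on the bdevperf side so transient transport
  # errors are counted and visible through bdev_get_iostat
  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # start from a clean state, then attach the target with data digest (DDGST) enabled
  "$rpc" accel_error_inject_error -o crc32c -t disable
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt every 32nd crc32c operation, then run the timed workload
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$bperf_sock" perform_tests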
00:21:48.291 [2024-05-15 02:38:35.700177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.291 [2024-05-15 02:38:35.700570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.291 [2024-05-15 02:38:35.700611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.721375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.721997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.722041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.744484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.745077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.745107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.768946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.769528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.769560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.791045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.791531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.791590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.814330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.814891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.814941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.840374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.840938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.840968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.865263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.865970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.866000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.888134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.888553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.888580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.909971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.910388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.910415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.930153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.930691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.930732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.551 [2024-05-15 02:38:35.954215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.551 [2024-05-15 02:38:35.954987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.551 [2024-05-15 02:38:35.955015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.811 [2024-05-15 02:38:35.977202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.811 [2024-05-15 02:38:35.977739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.811 [2024-05-15 02:38:35.977766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.811 [2024-05-15 02:38:36.000050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.811 [2024-05-15 02:38:36.000536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.811 [2024-05-15 02:38:36.000562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.811 [2024-05-15 02:38:36.023099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.811 [2024-05-15 02:38:36.023645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.811 [2024-05-15 02:38:36.023673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.811 [2024-05-15 02:38:36.047534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.811 [2024-05-15 02:38:36.047928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.811 [2024-05-15 02:38:36.047962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.811 [2024-05-15 02:38:36.067801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.811 [2024-05-15 02:38:36.068454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.811 [2024-05-15 02:38:36.068496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.811 [2024-05-15 02:38:36.090844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.811 [2024-05-15 02:38:36.091406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.811 [2024-05-15 02:38:36.091449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.811 [2024-05-15 02:38:36.113705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.811 [2024-05-15 02:38:36.114399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.811 [2024-05-15 02:38:36.114442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.811 [2024-05-15 02:38:36.137294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.811 [2024-05-15 02:38:36.137927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.811 [2024-05-15 02:38:36.137981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.812 [2024-05-15 02:38:36.159040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.812 [2024-05-15 02:38:36.159448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.812 [2024-05-15 02:38:36.159474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.812 [2024-05-15 02:38:36.182761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.812 [2024-05-15 02:38:36.183351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.812 [2024-05-15 02:38:36.183383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.812 [2024-05-15 02:38:36.206674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:48.812 [2024-05-15 02:38:36.207336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.812 [2024-05-15 02:38:36.207382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.070 [2024-05-15 02:38:36.229230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.070 [2024-05-15 02:38:36.229849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.070 [2024-05-15 02:38:36.229891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.070 [2024-05-15 02:38:36.252778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.070 [2024-05-15 02:38:36.253337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.070 [2024-05-15 02:38:36.253364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.070 [2024-05-15 02:38:36.274070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.070 [2024-05-15 02:38:36.274621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 [2024-05-15 02:38:36.274648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.071 [2024-05-15 02:38:36.298116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.071 [2024-05-15 02:38:36.298569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 [2024-05-15 02:38:36.298596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.071 [2024-05-15 02:38:36.323322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.071 [2024-05-15 02:38:36.323877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 
[2024-05-15 02:38:36.323919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.071 [2024-05-15 02:38:36.347245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.071 [2024-05-15 02:38:36.347653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 [2024-05-15 02:38:36.347680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.071 [2024-05-15 02:38:36.370855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.071 [2024-05-15 02:38:36.371437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 [2024-05-15 02:38:36.371483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.071 [2024-05-15 02:38:36.392385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.071 [2024-05-15 02:38:36.392939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 [2024-05-15 02:38:36.392988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.071 [2024-05-15 02:38:36.416730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.071 [2024-05-15 02:38:36.417502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 [2024-05-15 02:38:36.417528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.071 [2024-05-15 02:38:36.436566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.071 [2024-05-15 02:38:36.437248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 [2024-05-15 02:38:36.437275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.071 [2024-05-15 02:38:36.460462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.071 [2024-05-15 02:38:36.461152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 [2024-05-15 02:38:36.461194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.071 [2024-05-15 02:38:36.482470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.071 [2024-05-15 02:38:36.482978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.071 [2024-05-15 02:38:36.483007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.505982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.506459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.506500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.528653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.529076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.529104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.552699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.553200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.553243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.577116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.577851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.577877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.599902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.600398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.600444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.624525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.624892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.624951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.643815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.644428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.644472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.665265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.665900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.665949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.685363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.685742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.685770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.708135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.708661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.708689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.329 [2024-05-15 02:38:36.729414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.329 [2024-05-15 02:38:36.729801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.329 [2024-05-15 02:38:36.729829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.588 [2024-05-15 02:38:36.750097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.588 [2024-05-15 02:38:36.750717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.588 [2024-05-15 02:38:36.750743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.588 [2024-05-15 02:38:36.772219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.588 [2024-05-15 02:38:36.772607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.588 [2024-05-15 02:38:36.772635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.588 [2024-05-15 02:38:36.791144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.588 [2024-05-15 02:38:36.791539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.588 [2024-05-15 02:38:36.791566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.588 [2024-05-15 02:38:36.811027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.588 [2024-05-15 02:38:36.811489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.588 [2024-05-15 02:38:36.811516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.588 [2024-05-15 02:38:36.833299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.588 [2024-05-15 02:38:36.833823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.588 [2024-05-15 02:38:36.833858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.588 [2024-05-15 02:38:36.858173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.588 [2024-05-15 02:38:36.858633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.588 [2024-05-15 02:38:36.858661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.588 [2024-05-15 02:38:36.881664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.588 [2024-05-15 02:38:36.882082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.588 [2024-05-15 02:38:36.882125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.588 [2024-05-15 02:38:36.905673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.588 [2024-05-15 02:38:36.906347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.589 [2024-05-15 02:38:36.906403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.589 [2024-05-15 02:38:36.929462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.589 [2024-05-15 02:38:36.930086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.589 [2024-05-15 02:38:36.930114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.589 [2024-05-15 02:38:36.952377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.589 
[2024-05-15 02:38:36.953166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.589 [2024-05-15 02:38:36.953194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.589 [2024-05-15 02:38:36.975676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.589 [2024-05-15 02:38:36.976263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.589 [2024-05-15 02:38:36.976312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.589 [2024-05-15 02:38:36.999303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.589 [2024-05-15 02:38:36.999702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.589 [2024-05-15 02:38:36.999729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.020338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.020728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.020755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.042058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.042600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.042626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.063780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.064530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.064556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.086687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.087188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.087230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.109913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.110475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.110518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.133618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.134145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.134173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.156407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.156990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.157019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.176987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.177395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.177420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.198682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.199253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.199294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.221742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.222329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.222372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.848 [2024-05-15 02:38:37.247428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:49.848 [2024-05-15 02:38:37.248007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.848 [2024-05-15 02:38:37.248036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.108 [2024-05-15 02:38:37.271253] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.108 [2024-05-15 02:38:37.271739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.108 [2024-05-15 02:38:37.271764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.108 [2024-05-15 02:38:37.293216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.108 [2024-05-15 02:38:37.293969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.108 [2024-05-15 02:38:37.293997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.108 [2024-05-15 02:38:37.317970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.108 [2024-05-15 02:38:37.318388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.108 [2024-05-15 02:38:37.318432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.108 [2024-05-15 02:38:37.340102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.108 [2024-05-15 02:38:37.340794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.108 [2024-05-15 02:38:37.340820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.108 [2024-05-15 02:38:37.363561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.108 [2024-05-15 02:38:37.364123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.108 [2024-05-15 02:38:37.364171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.108 [2024-05-15 02:38:37.387867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.108 [2024-05-15 02:38:37.388386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.108 [2024-05-15 02:38:37.388431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.108 [2024-05-15 02:38:37.412512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.108 [2024-05-15 02:38:37.413134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.109 [2024-05-15 02:38:37.413176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
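Every record in this stretch follows the same pattern, and the run continues below: tcp.c reports a data digest (CRC32C) mismatch on the qpair, and the in-flight WRITE is then completed back to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The test only needs the total once the run finishes (88 in this run, checked further down via bdev_get_iostat). A minimal sketch of that count, reusing the RPC socket, bdev name and jq path that appear later in this trace:

# Illustrative only: same query the digest test performs below
scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'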
00:21:50.109 [2024-05-15 02:38:37.436233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.109 [2024-05-15 02:38:37.436726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.109 [2024-05-15 02:38:37.436752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.109 [2024-05-15 02:38:37.459185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.109 [2024-05-15 02:38:37.459566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.109 [2024-05-15 02:38:37.459607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.109 [2024-05-15 02:38:37.479623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.109 [2024-05-15 02:38:37.480205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.109 [2024-05-15 02:38:37.480246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.109 [2024-05-15 02:38:37.504137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.109 [2024-05-15 02:38:37.504622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.109 [2024-05-15 02:38:37.504649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.367 [2024-05-15 02:38:37.527759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.367 [2024-05-15 02:38:37.528346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.367 [2024-05-15 02:38:37.528389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.367 [2024-05-15 02:38:37.551077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.367 [2024-05-15 02:38:37.551764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.367 [2024-05-15 02:38:37.551791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.367 [2024-05-15 02:38:37.571123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.367 [2024-05-15 02:38:37.571673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.367 [2024-05-15 02:38:37.571700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.367 [2024-05-15 02:38:37.592501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.367 [2024-05-15 02:38:37.593048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.367 [2024-05-15 02:38:37.593077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.367 [2024-05-15 02:38:37.616096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.367 [2024-05-15 02:38:37.616671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.367 [2024-05-15 02:38:37.616714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.367 [2024-05-15 02:38:37.637329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.367 [2024-05-15 02:38:37.637693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.367 [2024-05-15 02:38:37.637721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.367 [2024-05-15 02:38:37.660233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.368 [2024-05-15 02:38:37.660701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.368 [2024-05-15 02:38:37.660729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.368 [2024-05-15 02:38:37.682505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e3670) with pdu=0x2000190fef90 00:21:50.368 [2024-05-15 02:38:37.683007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.368 [2024-05-15 02:38:37.683035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.368 00:21:50.368 Latency(us) 00:21:50.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.368 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:50.368 nvme0n1 : 2.01 1359.62 169.95 0.00 0.00 11731.95 9029.40 25826.04 00:21:50.368 =================================================================================================================== 00:21:50.368 Total : 1359.62 169.95 0.00 0.00 11731.95 9029.40 25826.04 00:21:50.368 0 00:21:50.368 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:50.368 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:50.368 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:50.368 | .driver_specific 00:21:50.368 | .nvme_error 00:21:50.368 | .status_code 
00:21:50.368 | .command_transient_transport_error' 00:21:50.368 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 88 > 0 )) 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2395972 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2395972 ']' 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2395972 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2395972 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2395972' 00:21:50.626 killing process with pid 2395972 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2395972 00:21:50.626 Received shutdown signal, test time was about 2.000000 seconds 00:21:50.626 00:21:50.626 Latency(us) 00:21:50.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.626 =================================================================================================================== 00:21:50.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.626 02:38:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2395972 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2394448 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2394448 ']' 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2394448 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2394448 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2394448' 00:21:50.886 killing process with pid 2394448 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2394448 00:21:50.886 [2024-05-15 02:38:38.291391] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:21:50.886 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2394448 00:21:51.461 00:21:51.461 real 0m17.442s 00:21:51.461 user 0m32.830s 00:21:51.461 sys 0m4.215s 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:51.461 ************************************ 00:21:51.461 END TEST nvmf_digest_error 00:21:51.461 ************************************ 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:51.461 rmmod nvme_tcp 00:21:51.461 rmmod nvme_fabrics 00:21:51.461 rmmod nvme_keyring 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2394448 ']' 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2394448 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 2394448 ']' 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 2394448 00:21:51.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2394448) - No such process 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 2394448 is not found' 00:21:51.461 Process with pid 2394448 is not found 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.461 02:38:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.369 02:38:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:53.369 00:21:53.369 real 0m39.872s 00:21:53.369 user 1m8.886s 00:21:53.369 sys 0m10.001s 00:21:53.369 02:38:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:53.369 02:38:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:53.369 ************************************ 00:21:53.369 END TEST nvmf_digest 00:21:53.369 
************************************ 00:21:53.369 02:38:40 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:21:53.369 02:38:40 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:21:53.369 02:38:40 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:21:53.369 02:38:40 nvmf_tcp -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:21:53.369 02:38:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:53.369 02:38:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:53.369 02:38:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.369 ************************************ 00:21:53.369 START TEST nvmf_bdevperf 00:21:53.369 ************************************ 00:21:53.369 02:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:21:53.629 * Looking for test storage... 00:21:53.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.629 02:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:56.165 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:56.165 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:56.165 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
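The two Intel 0x159b (E810) functions under 0000:0a:00.x have been matched against the supported-device list, and each is resolved to its kernel net device by globbing sysfs; the first port has just come back as cvl_0_0, and the same lookup runs for 0000:0a:00.1 next. A minimal sketch of that mapping (device address taken from this run):

# Map a PCI function to its net device the same way the pci_net_devs glob does
pci=0000:0a:00.0
ls /sys/bus/pci/devices/$pci/net/    # -> cvl_0_0 on this node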
00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:56.165 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:56.165 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:56.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:21:56.166 00:21:56.166 --- 10.0.0.2 ping statistics --- 00:21:56.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.166 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:21:56.166 00:21:56.166 --- 10.0.0.1 ping statistics --- 00:21:56.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.166 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2398726 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2398726 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2398726 ']' 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:56.166 02:38:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:56.166 [2024-05-15 02:38:43.538661] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
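At this point the link is verified in both directions: cvl_0_0 lives in the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the default namespace with 10.0.0.1/24, and the pings above confirm sub-millisecond round trips. nvmfappstart then launches the target inside that namespace; stripped of the workspace prefix, the command from this trace is roughly:

# Target launch as logged above (paths shortened); -m 0xE pins the reactors to
# cores 1-3, matching the "Reactor started on core 1/2/3" notices that follow
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!    # 2398726 in this run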
00:21:56.166 [2024-05-15 02:38:43.538754] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.424 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.424 [2024-05-15 02:38:43.623048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:56.424 [2024-05-15 02:38:43.748106] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.424 [2024-05-15 02:38:43.748181] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.424 [2024-05-15 02:38:43.748208] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.424 [2024-05-15 02:38:43.748221] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.424 [2024-05-15 02:38:43.748233] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.424 [2024-05-15 02:38:43.748346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.424 [2024-05-15 02:38:43.748411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.424 [2024-05-15 02:38:43.748414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:57.357 [2024-05-15 02:38:44.501262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:57.357 Malloc0 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
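With the app up on cores 1-3, the target is configured over /var/tmp/spdk.sock. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the four calls traced above are equivalent to the explicit sequence below (arguments exactly as logged; the 10.0.0.2:4420 listener is added immediately after):

# Sketch of the target-side setup performed via rpc_cmd in this run
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB malloc bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0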
00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.357 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:57.358 [2024-05-15 02:38:44.561802] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:57.358 [2024-05-15 02:38:44.562145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.358 { 00:21:57.358 "params": { 00:21:57.358 "name": "Nvme$subsystem", 00:21:57.358 "trtype": "$TEST_TRANSPORT", 00:21:57.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.358 "adrfam": "ipv4", 00:21:57.358 "trsvcid": "$NVMF_PORT", 00:21:57.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.358 "hdgst": ${hdgst:-false}, 00:21:57.358 "ddgst": ${ddgst:-false} 00:21:57.358 }, 00:21:57.358 "method": "bdev_nvme_attach_controller" 00:21:57.358 } 00:21:57.358 EOF 00:21:57.358 )") 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:21:57.358 02:38:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:57.358 "params": { 00:21:57.358 "name": "Nvme1", 00:21:57.358 "trtype": "tcp", 00:21:57.358 "traddr": "10.0.0.2", 00:21:57.358 "adrfam": "ipv4", 00:21:57.358 "trsvcid": "4420", 00:21:57.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.358 "hdgst": false, 00:21:57.358 "ddgst": false 00:21:57.358 }, 00:21:57.358 "method": "bdev_nvme_attach_controller" 00:21:57.358 }' 00:21:57.358 [2024-05-15 02:38:44.608829] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
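gen_nvmf_target_json renders the fragment above into a bdev-subsystem config and feeds it to bdevperf on an anonymous descriptor (--json /dev/fd/62). A standalone equivalent would look roughly like the following; the outer wrapper shape is an assumption, only the attach-controller parameters come from the trace, and /tmp/nvme1.json is a made-up path:

# Hypothetical standalone config for the same 1-second verify run
cat > /tmp/nvme1.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_nvme_attach_controller",
   "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false}}
]}]}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1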
00:21:57.358 [2024-05-15 02:38:44.608898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398881 ] 00:21:57.358 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.358 [2024-05-15 02:38:44.678647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.618 [2024-05-15 02:38:44.791313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.618 Running I/O for 1 seconds... 00:21:58.996 00:21:58.996 Latency(us) 00:21:58.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.996 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:58.996 Verification LBA range: start 0x0 length 0x4000 00:21:58.996 Nvme1n1 : 1.01 8294.47 32.40 0.00 0.00 15366.68 2936.98 17185.00 00:21:58.996 =================================================================================================================== 00:21:58.996 Total : 8294.47 32.40 0.00 0.00 15366.68 2936.98 17185.00 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2399139 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.996 { 00:21:58.996 "params": { 00:21:58.996 "name": "Nvme$subsystem", 00:21:58.996 "trtype": "$TEST_TRANSPORT", 00:21:58.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.996 "adrfam": "ipv4", 00:21:58.996 "trsvcid": "$NVMF_PORT", 00:21:58.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.996 "hdgst": ${hdgst:-false}, 00:21:58.996 "ddgst": ${ddgst:-false} 00:21:58.996 }, 00:21:58.996 "method": "bdev_nvme_attach_controller" 00:21:58.996 } 00:21:58.996 EOF 00:21:58.996 )") 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:21:58.996 02:38:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:58.996 "params": { 00:21:58.996 "name": "Nvme1", 00:21:58.996 "trtype": "tcp", 00:21:58.996 "traddr": "10.0.0.2", 00:21:58.996 "adrfam": "ipv4", 00:21:58.996 "trsvcid": "4420", 00:21:58.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.996 "hdgst": false, 00:21:58.996 "ddgst": false 00:21:58.996 }, 00:21:58.996 "method": "bdev_nvme_attach_controller" 00:21:58.996 }' 00:21:58.996 [2024-05-15 02:38:46.319797] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
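Both bdevperf runs above are fed their bdev configuration through a process-substitution fd (--json /dev/fd/62 and /dev/fd/63): gen_nvmf_target_json expands the heredoc fragment shown in the trace into a JSON config for the bdev layer with one bdev_nvme_attach_controller entry. A hand-written equivalent for driving the 15-second run outside the harness might look roughly like this (the file name and the surrounding "subsystems" wrapper are illustrative; the params mirror the JSON printed above):

    cat > /tmp/nvmf_bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/nvmf_bdevperf.json -q 128 -o 4096 -w verify -t 15 -f

The queue depth (-q 128), I/O size (-o 4096) and verify workload match the invocation in the log; the initial 1-second run reported roughly 8.3k IOPS at about 15.4 ms average latency before the longer run was started.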
00:21:58.996 [2024-05-15 02:38:46.319893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399139 ] 00:21:58.996 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.996 [2024-05-15 02:38:46.391164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.256 [2024-05-15 02:38:46.502872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.516 Running I/O for 15 seconds... 00:22:02.054 02:38:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2398726 00:22:02.054 02:38:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:22:02.054 [2024-05-15 02:38:49.292990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.054 [2024-05-15 02:38:49.293041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.054 [2024-05-15 02:38:49.293076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.054 [2024-05-15 02:38:49.293095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.054 [2024-05-15 02:38:49.293114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.055 [2024-05-15 02:38:49.293423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.055 [2024-05-15 02:38:49.293457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.055 [2024-05-15 02:38:49.293488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.055 [2024-05-15 02:38:49.293521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.055 [2024-05-15 02:38:49.293554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.055 [2024-05-15 02:38:49.293587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.055 [2024-05-15 02:38:49.293619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 
02:38:49.293703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.293955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.293973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:114 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.055 [2024-05-15 02:38:49.294518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.055 [2024-05-15 02:38:49.294535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.294551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.294593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.294626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.294658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.294692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.294730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.294763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37328 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.294795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.294838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.294870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.294903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.294951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.294969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 
[2024-05-15 02:38:49.295150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.056 [2024-05-15 02:38:49.295867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.056 [2024-05-15 02:38:49.295884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.056 [2024-05-15 02:38:49.295899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.295916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.295939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.295958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.295974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 
02:38:49.296545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.296694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.296726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.296759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.296791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.296824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.296857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.296893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.296927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.296951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.296968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.297016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.297046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.297075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.297105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.297134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.297165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.297195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:51 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.057 [2024-05-15 02:38:49.297246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.297279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.057 [2024-05-15 02:38:49.297296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.057 [2024-05-15 02:38:49.297312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.058 [2024-05-15 02:38:49.297349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.058 [2024-05-15 02:38:49.297382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.058 [2024-05-15 02:38:49.297415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.058 [2024-05-15 02:38:49.297447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.058 [2024-05-15 02:38:49.297480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d770 is same with the state(5) to be set 00:22:02.058 [2024-05-15 02:38:49.297515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.058 [2024-05-15 02:38:49.297528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.058 [2024-05-15 02:38:49.297542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37584 len:8 PRP1 0x0 PRP2 0x0 00:22:02.058 [2024-05-15 02:38:49.297557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297628] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x247d770 was disconnected and freed. reset controller. 00:22:02.058 [2024-05-15 02:38:49.297707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.058 [2024-05-15 02:38:49.297731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.058 [2024-05-15 02:38:49.297764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.058 [2024-05-15 02:38:49.297804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.058 [2024-05-15 02:38:49.297835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.058 [2024-05-15 02:38:49.297850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.058 [2024-05-15 02:38:49.301730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.058 [2024-05-15 02:38:49.301776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.058 [2024-05-15 02:38:49.302495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.302724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.302751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.058 [2024-05-15 02:38:49.302768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.058 [2024-05-15 02:38:49.303029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.058 [2024-05-15 02:38:49.303272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.058 [2024-05-15 02:38:49.303309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.058 [2024-05-15 02:38:49.303328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.058 [2024-05-15 02:38:49.307005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
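The flood of NOTICE lines above is the host-side fallout of the kill -9 issued by host/bdevperf.sh@33 further up: every command still outstanding on I/O qpair 1 is completed by the initiator with ABORTED - SQ DELETION (status code type 0x0, status code 0x08, the NVMe status for commands aborted because their submission queue went away), qpair 0x247d770 is freed, and bdev_nvme starts its reset/reconnect loop. Each reconnect attempt then fails in posix_sock_create with errno 111 because the killed target no longer listens on 10.0.0.2:4420, so the controller is put back into failed state and the cycle repeats, as the retries below show. On Linux errno 111 is ECONNREFUSED, which can be confirmed with (header path as on a typical Linux install):

    # errno 111 reported by the connect() failures above
    grep -w 111 /usr/include/asm-generic/errno.h
    # -> #define ECONNREFUSED   111   /* Connection refused */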
00:22:02.058 [2024-05-15 02:38:49.316089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.058 [2024-05-15 02:38:49.316589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.316843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.316872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.058 [2024-05-15 02:38:49.316890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.058 [2024-05-15 02:38:49.317141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.058 [2024-05-15 02:38:49.317404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.058 [2024-05-15 02:38:49.317429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.058 [2024-05-15 02:38:49.317445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.058 [2024-05-15 02:38:49.321115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.058 [2024-05-15 02:38:49.330147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.058 [2024-05-15 02:38:49.330663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.330947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.330992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.058 [2024-05-15 02:38:49.331008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.058 [2024-05-15 02:38:49.331240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.058 [2024-05-15 02:38:49.331502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.058 [2024-05-15 02:38:49.331527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.058 [2024-05-15 02:38:49.331543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.058 [2024-05-15 02:38:49.335218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.058 [2024-05-15 02:38:49.344108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.058 [2024-05-15 02:38:49.344574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.344907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.344961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.058 [2024-05-15 02:38:49.344997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.058 [2024-05-15 02:38:49.345215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.058 [2024-05-15 02:38:49.345481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.058 [2024-05-15 02:38:49.345506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.058 [2024-05-15 02:38:49.345522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.058 [2024-05-15 02:38:49.349177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.058 [2024-05-15 02:38:49.358212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.058 [2024-05-15 02:38:49.358670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.358947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.358986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.058 [2024-05-15 02:38:49.359006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.058 [2024-05-15 02:38:49.359248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.058 [2024-05-15 02:38:49.359494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.058 [2024-05-15 02:38:49.359519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.058 [2024-05-15 02:38:49.359535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.058 [2024-05-15 02:38:49.363159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.058 [2024-05-15 02:38:49.372322] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.058 [2024-05-15 02:38:49.372855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.373132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.373160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.058 [2024-05-15 02:38:49.373176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.058 [2024-05-15 02:38:49.373437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.058 [2024-05-15 02:38:49.373683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.058 [2024-05-15 02:38:49.373708] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.058 [2024-05-15 02:38:49.373724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.058 [2024-05-15 02:38:49.377391] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.058 [2024-05-15 02:38:49.386236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.058 [2024-05-15 02:38:49.386745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.058 [2024-05-15 02:38:49.387000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.387031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.059 [2024-05-15 02:38:49.387050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.059 [2024-05-15 02:38:49.387292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.059 [2024-05-15 02:38:49.387538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.059 [2024-05-15 02:38:49.387562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.059 [2024-05-15 02:38:49.387578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.059 [2024-05-15 02:38:49.391220] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.059 [2024-05-15 02:38:49.400210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.059 [2024-05-15 02:38:49.400701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.401055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.401086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.059 [2024-05-15 02:38:49.401104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.059 [2024-05-15 02:38:49.401346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.059 [2024-05-15 02:38:49.401592] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.059 [2024-05-15 02:38:49.401616] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.059 [2024-05-15 02:38:49.401632] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.059 [2024-05-15 02:38:49.405258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.059 [2024-05-15 02:38:49.414147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.059 [2024-05-15 02:38:49.414701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.414958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.414992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.059 [2024-05-15 02:38:49.415026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.059 [2024-05-15 02:38:49.415287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.059 [2024-05-15 02:38:49.415533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.059 [2024-05-15 02:38:49.415557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.059 [2024-05-15 02:38:49.415573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.059 [2024-05-15 02:38:49.419175] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.059 [2024-05-15 02:38:49.428159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.059 [2024-05-15 02:38:49.428637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.428882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.428941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.059 [2024-05-15 02:38:49.428967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.059 [2024-05-15 02:38:49.429210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.059 [2024-05-15 02:38:49.429465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.059 [2024-05-15 02:38:49.429489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.059 [2024-05-15 02:38:49.429505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.059 [2024-05-15 02:38:49.433132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.059 [2024-05-15 02:38:49.442106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.059 [2024-05-15 02:38:49.442557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.442825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.442851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.059 [2024-05-15 02:38:49.442866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.059 [2024-05-15 02:38:49.443110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.059 [2024-05-15 02:38:49.443357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.059 [2024-05-15 02:38:49.443381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.059 [2024-05-15 02:38:49.443397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.059 [2024-05-15 02:38:49.447026] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.059 [2024-05-15 02:38:49.455999] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.059 [2024-05-15 02:38:49.456496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.456734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.059 [2024-05-15 02:38:49.456763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.059 [2024-05-15 02:38:49.456781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.059 [2024-05-15 02:38:49.457034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.059 [2024-05-15 02:38:49.457281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.059 [2024-05-15 02:38:49.457305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.059 [2024-05-15 02:38:49.457321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.059 [2024-05-15 02:38:49.460956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.319 [2024-05-15 02:38:49.469980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.319 [2024-05-15 02:38:49.470476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.470687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.470716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.319 [2024-05-15 02:38:49.470734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.319 [2024-05-15 02:38:49.470993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.319 [2024-05-15 02:38:49.471241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.319 [2024-05-15 02:38:49.471265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.319 [2024-05-15 02:38:49.471281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.319 [2024-05-15 02:38:49.474893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.319 [2024-05-15 02:38:49.483910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.319 [2024-05-15 02:38:49.484395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.484612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.484641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.319 [2024-05-15 02:38:49.484659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.319 [2024-05-15 02:38:49.484900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.319 [2024-05-15 02:38:49.485156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.319 [2024-05-15 02:38:49.485181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.319 [2024-05-15 02:38:49.485197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.319 [2024-05-15 02:38:49.488810] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.319 [2024-05-15 02:38:49.497987] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.319 [2024-05-15 02:38:49.498464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.498714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.498766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.319 [2024-05-15 02:38:49.498785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.319 [2024-05-15 02:38:49.499037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.319 [2024-05-15 02:38:49.499284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.319 [2024-05-15 02:38:49.499308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.319 [2024-05-15 02:38:49.499324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.319 [2024-05-15 02:38:49.502944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.319 [2024-05-15 02:38:49.511905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.319 [2024-05-15 02:38:49.512393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.512602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.512631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.319 [2024-05-15 02:38:49.512649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.319 [2024-05-15 02:38:49.512890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.319 [2024-05-15 02:38:49.513151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.319 [2024-05-15 02:38:49.513177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.319 [2024-05-15 02:38:49.513193] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.319 [2024-05-15 02:38:49.516812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.319 [2024-05-15 02:38:49.525989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.319 [2024-05-15 02:38:49.526533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.526753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.526781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.319 [2024-05-15 02:38:49.526799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.319 [2024-05-15 02:38:49.527051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.319 [2024-05-15 02:38:49.527301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.319 [2024-05-15 02:38:49.527325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.319 [2024-05-15 02:38:49.527341] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.319 [2024-05-15 02:38:49.530967] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.319 [2024-05-15 02:38:49.539920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.319 [2024-05-15 02:38:49.540406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.540813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.540869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.319 [2024-05-15 02:38:49.540886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.319 [2024-05-15 02:38:49.541137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.319 [2024-05-15 02:38:49.541384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.319 [2024-05-15 02:38:49.541408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.319 [2024-05-15 02:38:49.541424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.319 [2024-05-15 02:38:49.545047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.319 [2024-05-15 02:38:49.554042] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.319 [2024-05-15 02:38:49.554528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.554887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.554956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.319 [2024-05-15 02:38:49.554980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.319 [2024-05-15 02:38:49.555225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.319 [2024-05-15 02:38:49.555480] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.319 [2024-05-15 02:38:49.555510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.319 [2024-05-15 02:38:49.555527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.319 [2024-05-15 02:38:49.559225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.319 [2024-05-15 02:38:49.568005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.319 [2024-05-15 02:38:49.568485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.319 [2024-05-15 02:38:49.568770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.568799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.320 [2024-05-15 02:38:49.568817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.320 [2024-05-15 02:38:49.569068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.320 [2024-05-15 02:38:49.569316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.320 [2024-05-15 02:38:49.569340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.320 [2024-05-15 02:38:49.569356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.320 [2024-05-15 02:38:49.572975] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.320 [2024-05-15 02:38:49.581951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.320 [2024-05-15 02:38:49.582399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.582617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.582645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.320 [2024-05-15 02:38:49.582663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.320 [2024-05-15 02:38:49.582904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.320 [2024-05-15 02:38:49.583160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.320 [2024-05-15 02:38:49.583185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.320 [2024-05-15 02:38:49.583201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.320 [2024-05-15 02:38:49.586815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.320 [2024-05-15 02:38:49.595988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.320 [2024-05-15 02:38:49.596471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.596680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.596708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.320 [2024-05-15 02:38:49.596726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.320 [2024-05-15 02:38:49.596979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.320 [2024-05-15 02:38:49.597224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.320 [2024-05-15 02:38:49.597248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.320 [2024-05-15 02:38:49.597270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.320 [2024-05-15 02:38:49.600885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.320 [2024-05-15 02:38:49.610066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.320 [2024-05-15 02:38:49.610595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.610955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.610985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.320 [2024-05-15 02:38:49.611005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.320 [2024-05-15 02:38:49.611246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.320 [2024-05-15 02:38:49.611492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.320 [2024-05-15 02:38:49.611515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.320 [2024-05-15 02:38:49.611532] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.320 [2024-05-15 02:38:49.615152] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.320 [2024-05-15 02:38:49.624120] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.320 [2024-05-15 02:38:49.624583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.624842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.624890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.320 [2024-05-15 02:38:49.624908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.320 [2024-05-15 02:38:49.625160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.320 [2024-05-15 02:38:49.625406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.320 [2024-05-15 02:38:49.625430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.320 [2024-05-15 02:38:49.625446] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.320 [2024-05-15 02:38:49.629071] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.320 [2024-05-15 02:38:49.638032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.320 [2024-05-15 02:38:49.638505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.638789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.638817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.320 [2024-05-15 02:38:49.638834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.320 [2024-05-15 02:38:49.639086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.320 [2024-05-15 02:38:49.639332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.320 [2024-05-15 02:38:49.639357] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.320 [2024-05-15 02:38:49.639373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.320 [2024-05-15 02:38:49.643008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.320 [2024-05-15 02:38:49.651970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.320 [2024-05-15 02:38:49.652447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.652772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.652801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.320 [2024-05-15 02:38:49.652818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.320 [2024-05-15 02:38:49.653071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.320 [2024-05-15 02:38:49.653318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.320 [2024-05-15 02:38:49.653342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.320 [2024-05-15 02:38:49.653358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.320 [2024-05-15 02:38:49.656983] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.320 [2024-05-15 02:38:49.665949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.320 [2024-05-15 02:38:49.666424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.666671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.320 [2024-05-15 02:38:49.666723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.320 [2024-05-15 02:38:49.666741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.320 [2024-05-15 02:38:49.666996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.320 [2024-05-15 02:38:49.667242] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.320 [2024-05-15 02:38:49.667266] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.321 [2024-05-15 02:38:49.667282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.321 [2024-05-15 02:38:49.670894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.321 [2024-05-15 02:38:49.679862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.321 [2024-05-15 02:38:49.680365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.321 [2024-05-15 02:38:49.680676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.321 [2024-05-15 02:38:49.680705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.321 [2024-05-15 02:38:49.680723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.321 [2024-05-15 02:38:49.680975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.321 [2024-05-15 02:38:49.681221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.321 [2024-05-15 02:38:49.681246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.321 [2024-05-15 02:38:49.681262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.321 [2024-05-15 02:38:49.684872] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.321 [2024-05-15 02:38:49.693875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.321 [2024-05-15 02:38:49.694370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.321 [2024-05-15 02:38:49.694806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.321 [2024-05-15 02:38:49.694855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.321 [2024-05-15 02:38:49.694873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.321 [2024-05-15 02:38:49.695129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.321 [2024-05-15 02:38:49.695376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.321 [2024-05-15 02:38:49.695400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.321 [2024-05-15 02:38:49.695416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.321 [2024-05-15 02:38:49.699187] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.321 [2024-05-15 02:38:49.707964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.321 [2024-05-15 02:38:49.708457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.321 [2024-05-15 02:38:49.708644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.321 [2024-05-15 02:38:49.708673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.321 [2024-05-15 02:38:49.708691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.321 [2024-05-15 02:38:49.708943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.321 [2024-05-15 02:38:49.709189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.321 [2024-05-15 02:38:49.709214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.321 [2024-05-15 02:38:49.709230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.321 [2024-05-15 02:38:49.712841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.321 [2024-05-15 02:38:49.722024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.321 [2024-05-15 02:38:49.722505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.321 [2024-05-15 02:38:49.722716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.321 [2024-05-15 02:38:49.722745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.321 [2024-05-15 02:38:49.722762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.321 [2024-05-15 02:38:49.723015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.321 [2024-05-15 02:38:49.723261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.321 [2024-05-15 02:38:49.723286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.321 [2024-05-15 02:38:49.723302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.321 [2024-05-15 02:38:49.726916] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.582 [2024-05-15 02:38:49.735964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.582 [2024-05-15 02:38:49.736495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.582 [2024-05-15 02:38:49.736804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.582 [2024-05-15 02:38:49.736853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.582 [2024-05-15 02:38:49.736871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.582 [2024-05-15 02:38:49.737142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.582 [2024-05-15 02:38:49.737389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.582 [2024-05-15 02:38:49.737414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.582 [2024-05-15 02:38:49.737430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.582 [2024-05-15 02:38:49.741062] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.582 [2024-05-15 02:38:49.750025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.582 [2024-05-15 02:38:49.750502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.582 [2024-05-15 02:38:49.750806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.582 [2024-05-15 02:38:49.750854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.582 [2024-05-15 02:38:49.750872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.582 [2024-05-15 02:38:49.751124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.582 [2024-05-15 02:38:49.751370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.582 [2024-05-15 02:38:49.751394] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.582 [2024-05-15 02:38:49.751410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.582 [2024-05-15 02:38:49.755029] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.582 [2024-05-15 02:38:49.763989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.582 [2024-05-15 02:38:49.764468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.582 [2024-05-15 02:38:49.764694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.582 [2024-05-15 02:38:49.764722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.582 [2024-05-15 02:38:49.764740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.582 [2024-05-15 02:38:49.764994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.582 [2024-05-15 02:38:49.765240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.583 [2024-05-15 02:38:49.765265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.583 [2024-05-15 02:38:49.765281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.583 [2024-05-15 02:38:49.768892] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.583 [2024-05-15 02:38:49.778079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.583 [2024-05-15 02:38:49.778555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.778795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.778828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.583 [2024-05-15 02:38:49.778862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.583 [2024-05-15 02:38:49.779115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.583 [2024-05-15 02:38:49.779373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.583 [2024-05-15 02:38:49.779398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.583 [2024-05-15 02:38:49.779414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.583 [2024-05-15 02:38:49.783033] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.583 [2024-05-15 02:38:49.791995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.583 [2024-05-15 02:38:49.792448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.792732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.792777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.583 [2024-05-15 02:38:49.792795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.583 [2024-05-15 02:38:49.793049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.583 [2024-05-15 02:38:49.793296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.583 [2024-05-15 02:38:49.793321] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.583 [2024-05-15 02:38:49.793336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.583 [2024-05-15 02:38:49.796956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.583 [2024-05-15 02:38:49.806023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.583 [2024-05-15 02:38:49.806523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.806804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.806855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.583 [2024-05-15 02:38:49.806873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.583 [2024-05-15 02:38:49.807124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.583 [2024-05-15 02:38:49.807370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.583 [2024-05-15 02:38:49.807395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.583 [2024-05-15 02:38:49.807411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.583 [2024-05-15 02:38:49.811032] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.583 [2024-05-15 02:38:49.819993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.583 [2024-05-15 02:38:49.820455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.820726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.820773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.583 [2024-05-15 02:38:49.820796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.583 [2024-05-15 02:38:49.821052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.583 [2024-05-15 02:38:49.821298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.583 [2024-05-15 02:38:49.821323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.583 [2024-05-15 02:38:49.821339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.583 [2024-05-15 02:38:49.824958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.583 [2024-05-15 02:38:49.833921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.583 [2024-05-15 02:38:49.834416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.834695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.834723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.583 [2024-05-15 02:38:49.834741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.583 [2024-05-15 02:38:49.834994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.583 [2024-05-15 02:38:49.835241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.583 [2024-05-15 02:38:49.835265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.583 [2024-05-15 02:38:49.835281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.583 [2024-05-15 02:38:49.838893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.583 [2024-05-15 02:38:49.847861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.583 [2024-05-15 02:38:49.848350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.848593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.848642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.583 [2024-05-15 02:38:49.848660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.583 [2024-05-15 02:38:49.848901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.583 [2024-05-15 02:38:49.849156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.583 [2024-05-15 02:38:49.849182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.583 [2024-05-15 02:38:49.849197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.583 [2024-05-15 02:38:49.852809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.583 [2024-05-15 02:38:49.861770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.583 [2024-05-15 02:38:49.862256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.862475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.862506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.583 [2024-05-15 02:38:49.862524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.583 [2024-05-15 02:38:49.862772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.583 [2024-05-15 02:38:49.863029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.583 [2024-05-15 02:38:49.863055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.583 [2024-05-15 02:38:49.863071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.583 [2024-05-15 02:38:49.866695] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.583 [2024-05-15 02:38:49.875659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.583 [2024-05-15 02:38:49.876138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.876354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.583 [2024-05-15 02:38:49.876382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.583 [2024-05-15 02:38:49.876400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.583 [2024-05-15 02:38:49.876641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.583 [2024-05-15 02:38:49.876886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.583 [2024-05-15 02:38:49.876911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.583 [2024-05-15 02:38:49.876927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.583 [2024-05-15 02:38:49.880556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.583 [2024-05-15 02:38:49.889730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.583 [2024-05-15 02:38:49.890170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.890436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.890465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.584 [2024-05-15 02:38:49.890483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.584 [2024-05-15 02:38:49.890724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.584 [2024-05-15 02:38:49.890981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.584 [2024-05-15 02:38:49.891006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.584 [2024-05-15 02:38:49.891022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.584 [2024-05-15 02:38:49.894635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.584 [2024-05-15 02:38:49.903611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.584 [2024-05-15 02:38:49.904104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.904323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.904352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.584 [2024-05-15 02:38:49.904369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.584 [2024-05-15 02:38:49.904611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.584 [2024-05-15 02:38:49.904862] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.584 [2024-05-15 02:38:49.904887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.584 [2024-05-15 02:38:49.904903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.584 [2024-05-15 02:38:49.908529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.584 [2024-05-15 02:38:49.917515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.584 [2024-05-15 02:38:49.918002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.918245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.918281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.584 [2024-05-15 02:38:49.918299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.584 [2024-05-15 02:38:49.918541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.584 [2024-05-15 02:38:49.918788] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.584 [2024-05-15 02:38:49.918813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.584 [2024-05-15 02:38:49.918830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.584 [2024-05-15 02:38:49.922452] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.584 [2024-05-15 02:38:49.931461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.584 [2024-05-15 02:38:49.931942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.932143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.932173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.584 [2024-05-15 02:38:49.932191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.584 [2024-05-15 02:38:49.932433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.584 [2024-05-15 02:38:49.932680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.584 [2024-05-15 02:38:49.932705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.584 [2024-05-15 02:38:49.932721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.584 [2024-05-15 02:38:49.936349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.584 [2024-05-15 02:38:49.945527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.584 [2024-05-15 02:38:49.946002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.946212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.946242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.584 [2024-05-15 02:38:49.946260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.584 [2024-05-15 02:38:49.946501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.584 [2024-05-15 02:38:49.946754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.584 [2024-05-15 02:38:49.946780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.584 [2024-05-15 02:38:49.946796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.584 [2024-05-15 02:38:49.950425] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.584 [2024-05-15 02:38:49.959605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.584 [2024-05-15 02:38:49.960060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.960273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.960303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.584 [2024-05-15 02:38:49.960321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.584 [2024-05-15 02:38:49.960562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.584 [2024-05-15 02:38:49.960810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.584 [2024-05-15 02:38:49.960834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.584 [2024-05-15 02:38:49.960850] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.584 [2024-05-15 02:38:49.964483] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.584 [2024-05-15 02:38:49.973664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.584 [2024-05-15 02:38:49.974151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.974395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.974443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.584 [2024-05-15 02:38:49.974462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.584 [2024-05-15 02:38:49.974704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.584 [2024-05-15 02:38:49.974964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.584 [2024-05-15 02:38:49.974990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.584 [2024-05-15 02:38:49.975007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.584 [2024-05-15 02:38:49.978622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.584 [2024-05-15 02:38:49.987589] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.584 [2024-05-15 02:38:49.988078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.988414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.584 [2024-05-15 02:38:49.988444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.584 [2024-05-15 02:38:49.988462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.584 [2024-05-15 02:38:49.988704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.584 [2024-05-15 02:38:49.988964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.584 [2024-05-15 02:38:49.988990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.584 [2024-05-15 02:38:49.989015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.584 [2024-05-15 02:38:49.992659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.846 [2024-05-15 02:38:50.001968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.846 [2024-05-15 02:38:50.002549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.002845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.002886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.846 [2024-05-15 02:38:50.002913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.846 [2024-05-15 02:38:50.003240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.846 [2024-05-15 02:38:50.003520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.846 [2024-05-15 02:38:50.003550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.846 [2024-05-15 02:38:50.003567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.846 [2024-05-15 02:38:50.008260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.846 [2024-05-15 02:38:50.015992] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.846 [2024-05-15 02:38:50.016587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.016804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.016835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.846 [2024-05-15 02:38:50.016855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.846 [2024-05-15 02:38:50.017113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.846 [2024-05-15 02:38:50.017362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.846 [2024-05-15 02:38:50.017388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.846 [2024-05-15 02:38:50.017405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.846 [2024-05-15 02:38:50.021051] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.846 [2024-05-15 02:38:50.030051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.846 [2024-05-15 02:38:50.030582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.030863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.030918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.846 [2024-05-15 02:38:50.030946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.846 [2024-05-15 02:38:50.031192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.846 [2024-05-15 02:38:50.031439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.846 [2024-05-15 02:38:50.031464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.846 [2024-05-15 02:38:50.031490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.846 [2024-05-15 02:38:50.035109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.846 [2024-05-15 02:38:50.044088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.846 [2024-05-15 02:38:50.044701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.044979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.045010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.846 [2024-05-15 02:38:50.045029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.846 [2024-05-15 02:38:50.045273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.846 [2024-05-15 02:38:50.045520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.846 [2024-05-15 02:38:50.045545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.846 [2024-05-15 02:38:50.045561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.846 [2024-05-15 02:38:50.049192] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.846 [2024-05-15 02:38:50.058035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.846 [2024-05-15 02:38:50.058556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.058809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.058839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.846 [2024-05-15 02:38:50.058857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.846 [2024-05-15 02:38:50.059144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.846 [2024-05-15 02:38:50.059392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.846 [2024-05-15 02:38:50.059417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.846 [2024-05-15 02:38:50.059433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.846 [2024-05-15 02:38:50.063092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.846 [2024-05-15 02:38:50.072111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.846 [2024-05-15 02:38:50.072651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.073007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.073038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.846 [2024-05-15 02:38:50.073057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.846 [2024-05-15 02:38:50.073298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.846 [2024-05-15 02:38:50.073545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.846 [2024-05-15 02:38:50.073570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.846 [2024-05-15 02:38:50.073587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.846 [2024-05-15 02:38:50.077218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.846 [2024-05-15 02:38:50.086196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.846 [2024-05-15 02:38:50.086675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.086969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.087000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.846 [2024-05-15 02:38:50.087019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.846 [2024-05-15 02:38:50.087260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.846 [2024-05-15 02:38:50.087507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.846 [2024-05-15 02:38:50.087531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.846 [2024-05-15 02:38:50.087547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.846 [2024-05-15 02:38:50.091172] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.846 [2024-05-15 02:38:50.100176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.846 [2024-05-15 02:38:50.100734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.100998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.101028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.846 [2024-05-15 02:38:50.101046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.846 [2024-05-15 02:38:50.101288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.846 [2024-05-15 02:38:50.101535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.846 [2024-05-15 02:38:50.101560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.846 [2024-05-15 02:38:50.101577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.846 [2024-05-15 02:38:50.105204] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.846 [2024-05-15 02:38:50.114181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.846 [2024-05-15 02:38:50.114744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.114961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.846 [2024-05-15 02:38:50.114990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.846 [2024-05-15 02:38:50.115008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.846 [2024-05-15 02:38:50.115250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.846 [2024-05-15 02:38:50.115497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.847 [2024-05-15 02:38:50.115522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.847 [2024-05-15 02:38:50.115538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.847 [2024-05-15 02:38:50.119168] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.847 [2024-05-15 02:38:50.128147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.847 [2024-05-15 02:38:50.128631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.128851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.128881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.847 [2024-05-15 02:38:50.128899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.847 [2024-05-15 02:38:50.129154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.847 [2024-05-15 02:38:50.129400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.847 [2024-05-15 02:38:50.129426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.847 [2024-05-15 02:38:50.129442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.847 [2024-05-15 02:38:50.133067] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.847 [2024-05-15 02:38:50.142036] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.847 [2024-05-15 02:38:50.142521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.142966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.143028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.847 [2024-05-15 02:38:50.143047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.847 [2024-05-15 02:38:50.143288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.847 [2024-05-15 02:38:50.143533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.847 [2024-05-15 02:38:50.143558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.847 [2024-05-15 02:38:50.143574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.847 [2024-05-15 02:38:50.147201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.847 [2024-05-15 02:38:50.155962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.847 [2024-05-15 02:38:50.156442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.156811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.156861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.847 [2024-05-15 02:38:50.156879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.847 [2024-05-15 02:38:50.157134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.847 [2024-05-15 02:38:50.157381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.847 [2024-05-15 02:38:50.157406] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.847 [2024-05-15 02:38:50.157424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.847 [2024-05-15 02:38:50.161050] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.847 [2024-05-15 02:38:50.170022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.847 [2024-05-15 02:38:50.170501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.170846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.170899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.847 [2024-05-15 02:38:50.170917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.847 [2024-05-15 02:38:50.171170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.847 [2024-05-15 02:38:50.171417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.847 [2024-05-15 02:38:50.171442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.847 [2024-05-15 02:38:50.171458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.847 [2024-05-15 02:38:50.175086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.847 [2024-05-15 02:38:50.184057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.847 [2024-05-15 02:38:50.184549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.184996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.185026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.847 [2024-05-15 02:38:50.185044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.847 [2024-05-15 02:38:50.185285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.847 [2024-05-15 02:38:50.185530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.847 [2024-05-15 02:38:50.185556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.847 [2024-05-15 02:38:50.185572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.847 [2024-05-15 02:38:50.189231] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.847 [2024-05-15 02:38:50.197997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.847 [2024-05-15 02:38:50.198491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.198690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.198718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.847 [2024-05-15 02:38:50.198736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.847 [2024-05-15 02:38:50.198993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.847 [2024-05-15 02:38:50.199241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.847 [2024-05-15 02:38:50.199266] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.847 [2024-05-15 02:38:50.199282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.847 [2024-05-15 02:38:50.202901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.847 [2024-05-15 02:38:50.212092] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.847 [2024-05-15 02:38:50.212568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.212885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.212951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.847 [2024-05-15 02:38:50.212978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.847 [2024-05-15 02:38:50.213220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.847 [2024-05-15 02:38:50.213467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.847 [2024-05-15 02:38:50.213492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.847 [2024-05-15 02:38:50.213508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.847 [2024-05-15 02:38:50.217134] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.847 [2024-05-15 02:38:50.226108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.847 [2024-05-15 02:38:50.226567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.226788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.847 [2024-05-15 02:38:50.226820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.847 [2024-05-15 02:38:50.226838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.847 [2024-05-15 02:38:50.227095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.848 [2024-05-15 02:38:50.227341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.848 [2024-05-15 02:38:50.227367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.848 [2024-05-15 02:38:50.227382] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.848 [2024-05-15 02:38:50.231011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.848 [2024-05-15 02:38:50.240194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.848 [2024-05-15 02:38:50.240673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.848 [2024-05-15 02:38:50.240965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.848 [2024-05-15 02:38:50.240996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.848 [2024-05-15 02:38:50.241014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.848 [2024-05-15 02:38:50.241257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.848 [2024-05-15 02:38:50.241504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.848 [2024-05-15 02:38:50.241529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.848 [2024-05-15 02:38:50.241545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.848 [2024-05-15 02:38:50.245173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.848 [2024-05-15 02:38:50.254158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.848 [2024-05-15 02:38:50.254618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.848 [2024-05-15 02:38:50.254798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.848 [2024-05-15 02:38:50.254826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:02.848 [2024-05-15 02:38:50.254849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:02.848 [2024-05-15 02:38:50.255106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:02.848 [2024-05-15 02:38:50.255353] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.848 [2024-05-15 02:38:50.255378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.848 [2024-05-15 02:38:50.255393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.108 [2024-05-15 02:38:50.259051] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.108 [2024-05-15 02:38:50.268078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.108 [2024-05-15 02:38:50.268557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.268772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.268802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.108 [2024-05-15 02:38:50.268820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.108 [2024-05-15 02:38:50.269076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.108 [2024-05-15 02:38:50.269324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.108 [2024-05-15 02:38:50.269349] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.108 [2024-05-15 02:38:50.269365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.108 [2024-05-15 02:38:50.272994] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.108 [2024-05-15 02:38:50.281969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.108 [2024-05-15 02:38:50.282446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.282732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.282781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.108 [2024-05-15 02:38:50.282799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.108 [2024-05-15 02:38:50.283054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.108 [2024-05-15 02:38:50.283300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.108 [2024-05-15 02:38:50.283324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.108 [2024-05-15 02:38:50.283340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.108 [2024-05-15 02:38:50.286964] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.108 [2024-05-15 02:38:50.295937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.108 [2024-05-15 02:38:50.296426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.296641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.296669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.108 [2024-05-15 02:38:50.296687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.108 [2024-05-15 02:38:50.296946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.108 [2024-05-15 02:38:50.297193] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.108 [2024-05-15 02:38:50.297218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.108 [2024-05-15 02:38:50.297234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.108 [2024-05-15 02:38:50.300851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.108 [2024-05-15 02:38:50.309937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.108 [2024-05-15 02:38:50.310463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.310755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.310808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.108 [2024-05-15 02:38:50.310826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.108 [2024-05-15 02:38:50.311081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.108 [2024-05-15 02:38:50.311327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.108 [2024-05-15 02:38:50.311352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.108 [2024-05-15 02:38:50.311368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.108 [2024-05-15 02:38:50.315034] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.108 [2024-05-15 02:38:50.323815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.108 [2024-05-15 02:38:50.324315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.324497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.324529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.108 [2024-05-15 02:38:50.324547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.108 [2024-05-15 02:38:50.324791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.108 [2024-05-15 02:38:50.325050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.108 [2024-05-15 02:38:50.325077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.108 [2024-05-15 02:38:50.325093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.108 [2024-05-15 02:38:50.328719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.108 [2024-05-15 02:38:50.338149] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.108 [2024-05-15 02:38:50.338629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.338805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.108 [2024-05-15 02:38:50.338835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.108 [2024-05-15 02:38:50.338853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.339106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.339361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.339387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.339403] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.343031] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.109 [2024-05-15 02:38:50.352229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.352702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.352890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.352919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.109 [2024-05-15 02:38:50.352947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.353191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.353436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.353461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.353477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.357116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.109 [2024-05-15 02:38:50.366307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.366767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.366991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.367021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.109 [2024-05-15 02:38:50.367039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.367281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.367528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.367554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.367570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.371197] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.109 [2024-05-15 02:38:50.380509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.380995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.381196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.381225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.109 [2024-05-15 02:38:50.381244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.381485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.381732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.381761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.381779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.385414] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.109 [2024-05-15 02:38:50.394402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.394885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.395102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.395132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.109 [2024-05-15 02:38:50.395150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.395391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.395638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.395663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.395679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.399309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.109 [2024-05-15 02:38:50.408296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.408759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.408947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.408977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.109 [2024-05-15 02:38:50.408996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.409237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.409483] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.409507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.409523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.413149] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.109 [2024-05-15 02:38:50.422335] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.422809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.423051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.423082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.109 [2024-05-15 02:38:50.423101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.423342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.423587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.423612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.423634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.427262] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.109 [2024-05-15 02:38:50.436242] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.436725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.436973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.437003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.109 [2024-05-15 02:38:50.437022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.437264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.437510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.437535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.437551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.441176] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.109 [2024-05-15 02:38:50.450149] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.450672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.450947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.450999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.109 [2024-05-15 02:38:50.451017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.451258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.451503] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.451527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.451543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.455165] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.109 [2024-05-15 02:38:50.464146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.464611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.464834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.109 [2024-05-15 02:38:50.464864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.109 [2024-05-15 02:38:50.464882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.109 [2024-05-15 02:38:50.465145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.109 [2024-05-15 02:38:50.465393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.109 [2024-05-15 02:38:50.465418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.109 [2024-05-15 02:38:50.465434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.109 [2024-05-15 02:38:50.469081] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.109 [2024-05-15 02:38:50.478053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.109 [2024-05-15 02:38:50.478539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.110 [2024-05-15 02:38:50.478799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.110 [2024-05-15 02:38:50.478848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.110 [2024-05-15 02:38:50.478867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.110 [2024-05-15 02:38:50.479121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.110 [2024-05-15 02:38:50.479367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.110 [2024-05-15 02:38:50.479391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.110 [2024-05-15 02:38:50.479407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.110 [2024-05-15 02:38:50.483029] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.110 [2024-05-15 02:38:50.491989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.110 [2024-05-15 02:38:50.492464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.110 [2024-05-15 02:38:50.492752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.110 [2024-05-15 02:38:50.492797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.110 [2024-05-15 02:38:50.492815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.110 [2024-05-15 02:38:50.493065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.110 [2024-05-15 02:38:50.493311] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.110 [2024-05-15 02:38:50.493336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.110 [2024-05-15 02:38:50.493352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.110 [2024-05-15 02:38:50.496977] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.110 [2024-05-15 02:38:50.505947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.110 [2024-05-15 02:38:50.506434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.110 [2024-05-15 02:38:50.506705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.110 [2024-05-15 02:38:50.506734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.110 [2024-05-15 02:38:50.506752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.110 [2024-05-15 02:38:50.507005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.110 [2024-05-15 02:38:50.507263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.110 [2024-05-15 02:38:50.507288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.110 [2024-05-15 02:38:50.507304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.110 [2024-05-15 02:38:50.510918] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.110 [2024-05-15 02:38:50.519935] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.110 [2024-05-15 02:38:50.520418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.110 [2024-05-15 02:38:50.520643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.110 [2024-05-15 02:38:50.520671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.110 [2024-05-15 02:38:50.520689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.110 [2024-05-15 02:38:50.520944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.110 [2024-05-15 02:38:50.521195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.110 [2024-05-15 02:38:50.521221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.110 [2024-05-15 02:38:50.521237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.372 [2024-05-15 02:38:50.524870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.372 [2024-05-15 02:38:50.533884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.372 [2024-05-15 02:38:50.534373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.534593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.534624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.372 [2024-05-15 02:38:50.534643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.372 [2024-05-15 02:38:50.534886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.372 [2024-05-15 02:38:50.535148] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.372 [2024-05-15 02:38:50.535174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.372 [2024-05-15 02:38:50.535190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.372 [2024-05-15 02:38:50.538808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.372 [2024-05-15 02:38:50.547813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.372 [2024-05-15 02:38:50.548314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.548545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.548575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.372 [2024-05-15 02:38:50.548593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.372 [2024-05-15 02:38:50.548834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.372 [2024-05-15 02:38:50.549092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.372 [2024-05-15 02:38:50.549118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.372 [2024-05-15 02:38:50.549133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.372 [2024-05-15 02:38:50.552755] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.372 [2024-05-15 02:38:50.561855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.372 [2024-05-15 02:38:50.562363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.562559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.562587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.372 [2024-05-15 02:38:50.562605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.372 [2024-05-15 02:38:50.562847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.372 [2024-05-15 02:38:50.563104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.372 [2024-05-15 02:38:50.563129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.372 [2024-05-15 02:38:50.563145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.372 [2024-05-15 02:38:50.566764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.372 [2024-05-15 02:38:50.575741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.372 [2024-05-15 02:38:50.576240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.576548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.576595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.372 [2024-05-15 02:38:50.576614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.372 [2024-05-15 02:38:50.576857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.372 [2024-05-15 02:38:50.577117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.372 [2024-05-15 02:38:50.577143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.372 [2024-05-15 02:38:50.577160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.372 [2024-05-15 02:38:50.580780] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.372 [2024-05-15 02:38:50.589760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.372 [2024-05-15 02:38:50.590245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.590491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.590522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.372 [2024-05-15 02:38:50.590540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.372 [2024-05-15 02:38:50.590782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.372 [2024-05-15 02:38:50.591043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.372 [2024-05-15 02:38:50.591070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.372 [2024-05-15 02:38:50.591086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.372 [2024-05-15 02:38:50.594706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.372 [2024-05-15 02:38:50.603683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.372 [2024-05-15 02:38:50.604156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.604490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.604542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.372 [2024-05-15 02:38:50.604562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.372 [2024-05-15 02:38:50.604803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.372 [2024-05-15 02:38:50.605064] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.372 [2024-05-15 02:38:50.605091] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.372 [2024-05-15 02:38:50.605106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.372 [2024-05-15 02:38:50.608721] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.372 [2024-05-15 02:38:50.617693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.372 [2024-05-15 02:38:50.618256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.618557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.618587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.372 [2024-05-15 02:38:50.618605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.372 [2024-05-15 02:38:50.618848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.372 [2024-05-15 02:38:50.619107] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.372 [2024-05-15 02:38:50.619132] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.372 [2024-05-15 02:38:50.619148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.372 [2024-05-15 02:38:50.622764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.372 [2024-05-15 02:38:50.631747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.372 [2024-05-15 02:38:50.632246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.632462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.372 [2024-05-15 02:38:50.632492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.373 [2024-05-15 02:38:50.632510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.373 [2024-05-15 02:38:50.632753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.373 [2024-05-15 02:38:50.633014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.373 [2024-05-15 02:38:50.633040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.373 [2024-05-15 02:38:50.633057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.373 [2024-05-15 02:38:50.636724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.373 [2024-05-15 02:38:50.645702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.373 [2024-05-15 02:38:50.646212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.646477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.646507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.373 [2024-05-15 02:38:50.646536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.373 [2024-05-15 02:38:50.646779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.373 [2024-05-15 02:38:50.647041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.373 [2024-05-15 02:38:50.647067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.373 [2024-05-15 02:38:50.647084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.373 [2024-05-15 02:38:50.650701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.373 [2024-05-15 02:38:50.659681] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.373 [2024-05-15 02:38:50.660153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.660370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.660399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.373 [2024-05-15 02:38:50.660418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.373 [2024-05-15 02:38:50.660659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.373 [2024-05-15 02:38:50.660906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.373 [2024-05-15 02:38:50.660942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.373 [2024-05-15 02:38:50.660961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.373 [2024-05-15 02:38:50.664578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.373 [2024-05-15 02:38:50.673764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.373 [2024-05-15 02:38:50.674227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.674407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.674435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.373 [2024-05-15 02:38:50.674453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.373 [2024-05-15 02:38:50.674695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.373 [2024-05-15 02:38:50.674955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.373 [2024-05-15 02:38:50.674980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.373 [2024-05-15 02:38:50.674996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.373 [2024-05-15 02:38:50.678611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.373 [2024-05-15 02:38:50.687800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.373 [2024-05-15 02:38:50.688259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.688466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.688494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.373 [2024-05-15 02:38:50.688513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.373 [2024-05-15 02:38:50.688759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.373 [2024-05-15 02:38:50.689019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.373 [2024-05-15 02:38:50.689045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.373 [2024-05-15 02:38:50.689061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.373 [2024-05-15 02:38:50.692679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.373 [2024-05-15 02:38:50.701863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.373 [2024-05-15 02:38:50.702303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.702488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.702518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.373 [2024-05-15 02:38:50.702536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.373 [2024-05-15 02:38:50.702778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.373 [2024-05-15 02:38:50.703042] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.373 [2024-05-15 02:38:50.703068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.373 [2024-05-15 02:38:50.703083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.373 [2024-05-15 02:38:50.706702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.373 [2024-05-15 02:38:50.715889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.373 [2024-05-15 02:38:50.716361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.716603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.716633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.373 [2024-05-15 02:38:50.716652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.373 [2024-05-15 02:38:50.716894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.373 [2024-05-15 02:38:50.717155] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.373 [2024-05-15 02:38:50.717181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.373 [2024-05-15 02:38:50.717196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.373 [2024-05-15 02:38:50.720815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.373 [2024-05-15 02:38:50.729800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.373 [2024-05-15 02:38:50.730273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.730547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.373 [2024-05-15 02:38:50.730578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.373 [2024-05-15 02:38:50.730597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.373 [2024-05-15 02:38:50.730840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.373 [2024-05-15 02:38:50.731108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.373 [2024-05-15 02:38:50.731135] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.373 [2024-05-15 02:38:50.731151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.373 [2024-05-15 02:38:50.734767] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.373 [2024-05-15 02:38:50.743747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.373 [2024-05-15 02:38:50.744244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.374 [2024-05-15 02:38:50.744534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.374 [2024-05-15 02:38:50.744563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.374 [2024-05-15 02:38:50.744581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.374 [2024-05-15 02:38:50.744823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.374 [2024-05-15 02:38:50.745083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.374 [2024-05-15 02:38:50.745110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.374 [2024-05-15 02:38:50.745126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.374 [2024-05-15 02:38:50.748743] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.374 [2024-05-15 02:38:50.757719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.374 [2024-05-15 02:38:50.758211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.374 [2024-05-15 02:38:50.758462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.374 [2024-05-15 02:38:50.758509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.374 [2024-05-15 02:38:50.758527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.374 [2024-05-15 02:38:50.758769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.374 [2024-05-15 02:38:50.759030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.374 [2024-05-15 02:38:50.759056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.374 [2024-05-15 02:38:50.759073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.374 [2024-05-15 02:38:50.762691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.374 [2024-05-15 02:38:50.771670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.374 [2024-05-15 02:38:50.772135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.374 [2024-05-15 02:38:50.772330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.374 [2024-05-15 02:38:50.772360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.374 [2024-05-15 02:38:50.772378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.374 [2024-05-15 02:38:50.772621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.374 [2024-05-15 02:38:50.772869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.374 [2024-05-15 02:38:50.772899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.374 [2024-05-15 02:38:50.772915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.374 [2024-05-15 02:38:50.776547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.634 [2024-05-15 02:38:50.785801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.634 [2024-05-15 02:38:50.786292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.786502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.786549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.634 [2024-05-15 02:38:50.786567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.634 [2024-05-15 02:38:50.786810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.634 [2024-05-15 02:38:50.787071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.634 [2024-05-15 02:38:50.787098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.634 [2024-05-15 02:38:50.787114] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.634 [2024-05-15 02:38:50.790756] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.634 [2024-05-15 02:38:50.799744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.634 [2024-05-15 02:38:50.800327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.800608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.800655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.634 [2024-05-15 02:38:50.800673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.634 [2024-05-15 02:38:50.800915] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.634 [2024-05-15 02:38:50.801176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.634 [2024-05-15 02:38:50.801201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.634 [2024-05-15 02:38:50.801218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.634 [2024-05-15 02:38:50.804834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.634 [2024-05-15 02:38:50.813671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.634 [2024-05-15 02:38:50.814156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.814583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.814634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.634 [2024-05-15 02:38:50.814652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.634 [2024-05-15 02:38:50.814895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.634 [2024-05-15 02:38:50.815151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.634 [2024-05-15 02:38:50.815178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.634 [2024-05-15 02:38:50.815199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.634 [2024-05-15 02:38:50.818846] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.634 [2024-05-15 02:38:50.827640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.634 [2024-05-15 02:38:50.828119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.828305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.828333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.634 [2024-05-15 02:38:50.828351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.634 [2024-05-15 02:38:50.828592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.634 [2024-05-15 02:38:50.828838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.634 [2024-05-15 02:38:50.828863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.634 [2024-05-15 02:38:50.828878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.634 [2024-05-15 02:38:50.832514] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.634 [2024-05-15 02:38:50.841703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.634 [2024-05-15 02:38:50.842191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.842404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.842434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.634 [2024-05-15 02:38:50.842451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.634 [2024-05-15 02:38:50.842693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.634 [2024-05-15 02:38:50.842955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.634 [2024-05-15 02:38:50.842981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.634 [2024-05-15 02:38:50.842997] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.634 [2024-05-15 02:38:50.846616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.634 [2024-05-15 02:38:50.855595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.634 [2024-05-15 02:38:50.856046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.856296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.856326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.634 [2024-05-15 02:38:50.856344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.634 [2024-05-15 02:38:50.856585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.634 [2024-05-15 02:38:50.856832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.634 [2024-05-15 02:38:50.856857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.634 [2024-05-15 02:38:50.856873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.634 [2024-05-15 02:38:50.860504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.634 [2024-05-15 02:38:50.869688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.634 [2024-05-15 02:38:50.870149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.870362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.870392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.634 [2024-05-15 02:38:50.870410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.634 [2024-05-15 02:38:50.870652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.634 [2024-05-15 02:38:50.870899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.634 [2024-05-15 02:38:50.870924] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.634 [2024-05-15 02:38:50.870955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.634 [2024-05-15 02:38:50.874574] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.634 [2024-05-15 02:38:50.883757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.634 [2024-05-15 02:38:50.884245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.884433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.634 [2024-05-15 02:38:50.884463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.634 [2024-05-15 02:38:50.884480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.634 [2024-05-15 02:38:50.884723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.634 [2024-05-15 02:38:50.884986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.634 [2024-05-15 02:38:50.885012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:50.885029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:50.888646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.635 [2024-05-15 02:38:50.897836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:50.898347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.898615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.898644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:50.898662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.635 [2024-05-15 02:38:50.898904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.635 [2024-05-15 02:38:50.899163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.635 [2024-05-15 02:38:50.899189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:50.899206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:50.902826] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.635 [2024-05-15 02:38:50.911814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:50.915947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.916214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.916246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:50.916264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.635 [2024-05-15 02:38:50.916510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.635 [2024-05-15 02:38:50.916756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.635 [2024-05-15 02:38:50.916783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:50.916800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:50.920454] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.635 [2024-05-15 02:38:50.925810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:50.926309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.926594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.926623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:50.926642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.635 [2024-05-15 02:38:50.926884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.635 [2024-05-15 02:38:50.927144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.635 [2024-05-15 02:38:50.927169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:50.927197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:50.930828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.635 [2024-05-15 02:38:50.939808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:50.940280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.940533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.940561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:50.940579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.635 [2024-05-15 02:38:50.940820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.635 [2024-05-15 02:38:50.941078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.635 [2024-05-15 02:38:50.941103] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:50.941119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:50.944738] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.635 [2024-05-15 02:38:50.953712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:50.954201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.954467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.954498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:50.954517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.635 [2024-05-15 02:38:50.954759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.635 [2024-05-15 02:38:50.955028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.635 [2024-05-15 02:38:50.955055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:50.955070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:50.958693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.635 [2024-05-15 02:38:50.967672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:50.968153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.968395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.968425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:50.968443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.635 [2024-05-15 02:38:50.968683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.635 [2024-05-15 02:38:50.968939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.635 [2024-05-15 02:38:50.968963] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:50.968979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:50.972599] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.635 [2024-05-15 02:38:50.981579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:50.982030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.982244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.982272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:50.982290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.635 [2024-05-15 02:38:50.982531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.635 [2024-05-15 02:38:50.982776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.635 [2024-05-15 02:38:50.982800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:50.982816] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:50.986442] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.635 [2024-05-15 02:38:50.995618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:50.996098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.996495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:50.996552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:50.996571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.635 [2024-05-15 02:38:50.996811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.635 [2024-05-15 02:38:50.997070] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.635 [2024-05-15 02:38:50.997095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:50.997111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:51.000724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.635 [2024-05-15 02:38:51.009687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:51.010145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:51.010419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:51.010447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:51.010466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.635 [2024-05-15 02:38:51.010707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.635 [2024-05-15 02:38:51.010965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.635 [2024-05-15 02:38:51.010990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.635 [2024-05-15 02:38:51.011006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.635 [2024-05-15 02:38:51.014618] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.635 [2024-05-15 02:38:51.023583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.635 [2024-05-15 02:38:51.024061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:51.024400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.635 [2024-05-15 02:38:51.024451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.635 [2024-05-15 02:38:51.024468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.636 [2024-05-15 02:38:51.024709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.636 [2024-05-15 02:38:51.024966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.636 [2024-05-15 02:38:51.024991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.636 [2024-05-15 02:38:51.025008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.636 [2024-05-15 02:38:51.028620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.636 [2024-05-15 02:38:51.037607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.636 [2024-05-15 02:38:51.038062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.636 [2024-05-15 02:38:51.038255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.636 [2024-05-15 02:38:51.038284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.636 [2024-05-15 02:38:51.038307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.636 [2024-05-15 02:38:51.038548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.636 [2024-05-15 02:38:51.038794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.636 [2024-05-15 02:38:51.038819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.636 [2024-05-15 02:38:51.038835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.636 [2024-05-15 02:38:51.042458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.895 [2024-05-15 02:38:51.051497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.895 [2024-05-15 02:38:51.052017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.895 [2024-05-15 02:38:51.052259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.895 [2024-05-15 02:38:51.052288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.895 [2024-05-15 02:38:51.052306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.895 [2024-05-15 02:38:51.052555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.895 [2024-05-15 02:38:51.052803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.895 [2024-05-15 02:38:51.052828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.895 [2024-05-15 02:38:51.052844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.895 [2024-05-15 02:38:51.056481] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.895 [2024-05-15 02:38:51.065575] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.895 [2024-05-15 02:38:51.066056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.895 [2024-05-15 02:38:51.066273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.895 [2024-05-15 02:38:51.066321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.895 [2024-05-15 02:38:51.066339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.895 [2024-05-15 02:38:51.066581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.895 [2024-05-15 02:38:51.066827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.895 [2024-05-15 02:38:51.066852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.895 [2024-05-15 02:38:51.066868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.895 [2024-05-15 02:38:51.070491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.895 [2024-05-15 02:38:51.079667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.895 [2024-05-15 02:38:51.080149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.895 [2024-05-15 02:38:51.080389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.895 [2024-05-15 02:38:51.080423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.895 [2024-05-15 02:38:51.080458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.895 [2024-05-15 02:38:51.080705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.895 [2024-05-15 02:38:51.080963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.896 [2024-05-15 02:38:51.080989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.896 [2024-05-15 02:38:51.081004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.896 [2024-05-15 02:38:51.084616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.896 [2024-05-15 02:38:51.093578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.896 [2024-05-15 02:38:51.094059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.094253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.094284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.896 [2024-05-15 02:38:51.094302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.896 [2024-05-15 02:38:51.094544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.896 [2024-05-15 02:38:51.094791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.896 [2024-05-15 02:38:51.094815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.896 [2024-05-15 02:38:51.094831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.896 [2024-05-15 02:38:51.098456] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.896 [2024-05-15 02:38:51.107630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.896 [2024-05-15 02:38:51.108081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.108361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.108390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.896 [2024-05-15 02:38:51.108408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.896 [2024-05-15 02:38:51.108649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.896 [2024-05-15 02:38:51.108895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.896 [2024-05-15 02:38:51.108920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.896 [2024-05-15 02:38:51.108946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.896 [2024-05-15 02:38:51.112564] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.896 [2024-05-15 02:38:51.121527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.896 [2024-05-15 02:38:51.121976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.122197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.122243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.896 [2024-05-15 02:38:51.122261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.896 [2024-05-15 02:38:51.122502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.896 [2024-05-15 02:38:51.122754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.896 [2024-05-15 02:38:51.122779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.896 [2024-05-15 02:38:51.122795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.896 [2024-05-15 02:38:51.126418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.896 [2024-05-15 02:38:51.135599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.896 [2024-05-15 02:38:51.136089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.136307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.136336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.896 [2024-05-15 02:38:51.136354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.896 [2024-05-15 02:38:51.136595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.896 [2024-05-15 02:38:51.136841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.896 [2024-05-15 02:38:51.136865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.896 [2024-05-15 02:38:51.136881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.896 [2024-05-15 02:38:51.140507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.896 [2024-05-15 02:38:51.149685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.896 [2024-05-15 02:38:51.150166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.150380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.150411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.896 [2024-05-15 02:38:51.150429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.896 [2024-05-15 02:38:51.150670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.896 [2024-05-15 02:38:51.150917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.896 [2024-05-15 02:38:51.150952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.896 [2024-05-15 02:38:51.150969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.896 [2024-05-15 02:38:51.154583] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.896 [2024-05-15 02:38:51.163759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.896 [2024-05-15 02:38:51.164251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.164467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.164496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.896 [2024-05-15 02:38:51.164514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.896 [2024-05-15 02:38:51.164756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.896 [2024-05-15 02:38:51.165013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.896 [2024-05-15 02:38:51.165044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.896 [2024-05-15 02:38:51.165061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.896 [2024-05-15 02:38:51.168675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.896 [2024-05-15 02:38:51.177643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.896 [2024-05-15 02:38:51.178094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.178373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.178402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.896 [2024-05-15 02:38:51.178419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.896 [2024-05-15 02:38:51.178660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.896 [2024-05-15 02:38:51.178906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.896 [2024-05-15 02:38:51.178940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.896 [2024-05-15 02:38:51.178958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.896 [2024-05-15 02:38:51.182577] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.896 [2024-05-15 02:38:51.191543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.896 [2024-05-15 02:38:51.192017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.192199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.192230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.896 [2024-05-15 02:38:51.192248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.896 [2024-05-15 02:38:51.192489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.896 [2024-05-15 02:38:51.192735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.896 [2024-05-15 02:38:51.192760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.896 [2024-05-15 02:38:51.192776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.896 [2024-05-15 02:38:51.196401] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.896 [2024-05-15 02:38:51.205574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.896 [2024-05-15 02:38:51.206053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.896 [2024-05-15 02:38:51.206269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.206297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.897 [2024-05-15 02:38:51.206315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.897 [2024-05-15 02:38:51.206556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.897 [2024-05-15 02:38:51.206802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.897 [2024-05-15 02:38:51.206826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.897 [2024-05-15 02:38:51.206848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.897 [2024-05-15 02:38:51.210475] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.897 [2024-05-15 02:38:51.219647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.897 [2024-05-15 02:38:51.220129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.220391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.220438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.897 [2024-05-15 02:38:51.220457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.897 [2024-05-15 02:38:51.220698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.897 [2024-05-15 02:38:51.220955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.897 [2024-05-15 02:38:51.220980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.897 [2024-05-15 02:38:51.220996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.897 [2024-05-15 02:38:51.224611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.897 [2024-05-15 02:38:51.233581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.897 [2024-05-15 02:38:51.234060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.234388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.234445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.897 [2024-05-15 02:38:51.234463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.897 [2024-05-15 02:38:51.234704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.897 [2024-05-15 02:38:51.234960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.897 [2024-05-15 02:38:51.234995] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.897 [2024-05-15 02:38:51.235010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.897 [2024-05-15 02:38:51.238628] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.897 [2024-05-15 02:38:51.247588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.897 [2024-05-15 02:38:51.248065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.248355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.248400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.897 [2024-05-15 02:38:51.248417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.897 [2024-05-15 02:38:51.248659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.897 [2024-05-15 02:38:51.248905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.897 [2024-05-15 02:38:51.248940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.897 [2024-05-15 02:38:51.248959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.897 [2024-05-15 02:38:51.252578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.897 [2024-05-15 02:38:51.261539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.897 [2024-05-15 02:38:51.262017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.262234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.262264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.897 [2024-05-15 02:38:51.262282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.897 [2024-05-15 02:38:51.262524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.897 [2024-05-15 02:38:51.262770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.897 [2024-05-15 02:38:51.262794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.897 [2024-05-15 02:38:51.262810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.897 [2024-05-15 02:38:51.266435] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.897 [2024-05-15 02:38:51.275605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.897 [2024-05-15 02:38:51.276095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.276339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.276368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.897 [2024-05-15 02:38:51.276386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.897 [2024-05-15 02:38:51.276627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.897 [2024-05-15 02:38:51.276873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.897 [2024-05-15 02:38:51.276898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.897 [2024-05-15 02:38:51.276914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.897 [2024-05-15 02:38:51.280541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.897 [2024-05-15 02:38:51.289505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.897 [2024-05-15 02:38:51.289979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.290385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.290439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.897 [2024-05-15 02:38:51.290457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.897 [2024-05-15 02:38:51.290697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.897 [2024-05-15 02:38:51.290954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.897 [2024-05-15 02:38:51.290979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.897 [2024-05-15 02:38:51.290995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.897 [2024-05-15 02:38:51.294610] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.897 [2024-05-15 02:38:51.303577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.897 [2024-05-15 02:38:51.304069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.304278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.897 [2024-05-15 02:38:51.304326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:03.897 [2024-05-15 02:38:51.304344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:03.897 [2024-05-15 02:38:51.304591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:03.897 [2024-05-15 02:38:51.304843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.897 [2024-05-15 02:38:51.304869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.897 [2024-05-15 02:38:51.304885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.157 [2024-05-15 02:38:51.308542] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.157 [2024-05-15 02:38:51.317662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.157 [2024-05-15 02:38:51.318142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.318437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.318469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.157 [2024-05-15 02:38:51.318487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.157 [2024-05-15 02:38:51.318729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.157 [2024-05-15 02:38:51.318986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.157 [2024-05-15 02:38:51.319012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.157 [2024-05-15 02:38:51.319028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.157 [2024-05-15 02:38:51.322639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.157 [2024-05-15 02:38:51.331609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.157 [2024-05-15 02:38:51.332065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.332384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.332413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.157 [2024-05-15 02:38:51.332431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.157 [2024-05-15 02:38:51.332673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.157 [2024-05-15 02:38:51.332919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.157 [2024-05-15 02:38:51.332953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.157 [2024-05-15 02:38:51.332970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.157 [2024-05-15 02:38:51.336587] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.157 [2024-05-15 02:38:51.345576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.157 [2024-05-15 02:38:51.346067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.346330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.346377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.157 [2024-05-15 02:38:51.346396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.157 [2024-05-15 02:38:51.346638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.157 [2024-05-15 02:38:51.346883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.157 [2024-05-15 02:38:51.346908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.157 [2024-05-15 02:38:51.346924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.157 [2024-05-15 02:38:51.350549] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.157 [2024-05-15 02:38:51.359514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.157 [2024-05-15 02:38:51.359990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.360194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.360224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.157 [2024-05-15 02:38:51.360242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.157 [2024-05-15 02:38:51.360483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.157 [2024-05-15 02:38:51.360729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.157 [2024-05-15 02:38:51.360753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.157 [2024-05-15 02:38:51.360769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.157 [2024-05-15 02:38:51.364390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.157 [2024-05-15 02:38:51.373561] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.157 [2024-05-15 02:38:51.374038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.374277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.374306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.157 [2024-05-15 02:38:51.374324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.157 [2024-05-15 02:38:51.374565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.157 [2024-05-15 02:38:51.374811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.157 [2024-05-15 02:38:51.374835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.157 [2024-05-15 02:38:51.374851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.157 [2024-05-15 02:38:51.378653] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.157 [2024-05-15 02:38:51.387615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.157 [2024-05-15 02:38:51.388092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.388312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.157 [2024-05-15 02:38:51.388347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.157 [2024-05-15 02:38:51.388365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.157 [2024-05-15 02:38:51.388607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.388852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.388877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.388893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.392516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.158 [2024-05-15 02:38:51.401697] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.402136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.402377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.402406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.158 [2024-05-15 02:38:51.402425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.158 [2024-05-15 02:38:51.402666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.402911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.402944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.402962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.406582] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.158 [2024-05-15 02:38:51.415766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.416235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.416440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.416469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.158 [2024-05-15 02:38:51.416487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.158 [2024-05-15 02:38:51.416729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.416986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.417011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.417026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.420642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.158 [2024-05-15 02:38:51.429823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.430314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.430554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.430584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.158 [2024-05-15 02:38:51.430607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.158 [2024-05-15 02:38:51.430849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.431107] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.431132] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.431148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.434762] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.158 [2024-05-15 02:38:51.443724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.444231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.444502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.444530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.158 [2024-05-15 02:38:51.444548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.158 [2024-05-15 02:38:51.444789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.445047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.445071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.445087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.448701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.158 [2024-05-15 02:38:51.457664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.458148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.458417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.458465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.158 [2024-05-15 02:38:51.458483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.158 [2024-05-15 02:38:51.458724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.458981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.459007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.459023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.462639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.158 [2024-05-15 02:38:51.471630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.472109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.472336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.472365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.158 [2024-05-15 02:38:51.472382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.158 [2024-05-15 02:38:51.472629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.472876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.472900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.472916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.476540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.158 [2024-05-15 02:38:51.485717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.486175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.486387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.486416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.158 [2024-05-15 02:38:51.486434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.158 [2024-05-15 02:38:51.486675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.486921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.486956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.486973] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.490588] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.158 [2024-05-15 02:38:51.499800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.500242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.500451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.500480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.158 [2024-05-15 02:38:51.500498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.158 [2024-05-15 02:38:51.500740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.500996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.501021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.501038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.504655] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.158 [2024-05-15 02:38:51.513838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.514278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.514500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.158 [2024-05-15 02:38:51.514534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.158 [2024-05-15 02:38:51.514568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.158 [2024-05-15 02:38:51.514809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.158 [2024-05-15 02:38:51.515071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.158 [2024-05-15 02:38:51.515097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.158 [2024-05-15 02:38:51.515112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.158 [2024-05-15 02:38:51.518727] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.158 [2024-05-15 02:38:51.527914] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.158 [2024-05-15 02:38:51.528372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.159 [2024-05-15 02:38:51.528611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.159 [2024-05-15 02:38:51.528659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.159 [2024-05-15 02:38:51.528678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.159 [2024-05-15 02:38:51.528919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.159 [2024-05-15 02:38:51.529180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.159 [2024-05-15 02:38:51.529205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.159 [2024-05-15 02:38:51.529221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.159 [2024-05-15 02:38:51.532830] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.159 [2024-05-15 02:38:51.542013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.159 [2024-05-15 02:38:51.542531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.159 [2024-05-15 02:38:51.542821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.159 [2024-05-15 02:38:51.542872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.159 [2024-05-15 02:38:51.542891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.159 [2024-05-15 02:38:51.543139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.159 [2024-05-15 02:38:51.543385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.159 [2024-05-15 02:38:51.543410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.159 [2024-05-15 02:38:51.543426] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.159 [2024-05-15 02:38:51.547047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.159 [2024-05-15 02:38:51.556025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.159 [2024-05-15 02:38:51.556456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.159 [2024-05-15 02:38:51.556714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.159 [2024-05-15 02:38:51.556744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.159 [2024-05-15 02:38:51.556762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.159 [2024-05-15 02:38:51.557027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.159 [2024-05-15 02:38:51.557276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.159 [2024-05-15 02:38:51.557309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.159 [2024-05-15 02:38:51.557326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.159 [2024-05-15 02:38:51.560953] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.159 [2024-05-15 02:38:51.570066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.420 [2024-05-15 02:38:51.570542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.420 [2024-05-15 02:38:51.570760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.420 [2024-05-15 02:38:51.570791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.420 [2024-05-15 02:38:51.570813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.420 [2024-05-15 02:38:51.571070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.420 [2024-05-15 02:38:51.571318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.420 [2024-05-15 02:38:51.571343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.420 [2024-05-15 02:38:51.571359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.420 [2024-05-15 02:38:51.574993] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.420 [2024-05-15 02:38:51.583982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.420 [2024-05-15 02:38:51.584467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.420 [2024-05-15 02:38:51.584752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.420 [2024-05-15 02:38:51.584781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.420 [2024-05-15 02:38:51.584799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.420 [2024-05-15 02:38:51.585050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.420 [2024-05-15 02:38:51.585296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.420 [2024-05-15 02:38:51.585320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.420 [2024-05-15 02:38:51.585336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.420 [2024-05-15 02:38:51.588957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.420 [2024-05-15 02:38:51.597927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.420 [2024-05-15 02:38:51.598435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.420 [2024-05-15 02:38:51.598614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.420 [2024-05-15 02:38:51.598643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.420 [2024-05-15 02:38:51.598661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.598901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.599163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.599188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.599221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.602835] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.421 [2024-05-15 02:38:51.612033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.612547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.612757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.612787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.421 [2024-05-15 02:38:51.612806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.613060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.613306] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.613330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.613346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.616963] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.421 [2024-05-15 02:38:51.625924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.626498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.626741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.626770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.421 [2024-05-15 02:38:51.626787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.627051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.627297] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.627322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.627344] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.630972] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.421 [2024-05-15 02:38:51.639941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.640471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.640704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.640733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.421 [2024-05-15 02:38:51.640750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.641001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.641248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.641273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.641289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.644909] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.421 [2024-05-15 02:38:51.653886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.654368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.654628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.654657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.421 [2024-05-15 02:38:51.654675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.654916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.655174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.655198] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.655214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.658835] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.421 [2024-05-15 02:38:51.667820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.668283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.668505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.668534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.421 [2024-05-15 02:38:51.668551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.668792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.669048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.669073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.669088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.672702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.421 [2024-05-15 02:38:51.681889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.682432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.682731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.682776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.421 [2024-05-15 02:38:51.682794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.683045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.683299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.683323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.683339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.686964] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.421 [2024-05-15 02:38:51.695942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.696404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.696655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.696701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.421 [2024-05-15 02:38:51.696720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.696971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.697218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.697242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.697258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.700869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.421 [2024-05-15 02:38:51.709835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.710321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.710541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.710570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.421 [2024-05-15 02:38:51.710588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.710829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.711085] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.711110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.711126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.714739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.421 [2024-05-15 02:38:51.723906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.724385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.724602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.421 [2024-05-15 02:38:51.724631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.421 [2024-05-15 02:38:51.724648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.421 [2024-05-15 02:38:51.724890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.421 [2024-05-15 02:38:51.725146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.421 [2024-05-15 02:38:51.725171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.421 [2024-05-15 02:38:51.725187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.421 [2024-05-15 02:38:51.728798] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.421 [2024-05-15 02:38:51.737980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.421 [2024-05-15 02:38:51.738528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.738791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.738839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.422 [2024-05-15 02:38:51.738857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.422 [2024-05-15 02:38:51.739109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.422 [2024-05-15 02:38:51.739356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.422 [2024-05-15 02:38:51.739380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.422 [2024-05-15 02:38:51.739395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.422 [2024-05-15 02:38:51.743016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.422 [2024-05-15 02:38:51.751977] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.422 [2024-05-15 02:38:51.752447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.752656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.752685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.422 [2024-05-15 02:38:51.752702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.422 [2024-05-15 02:38:51.752954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.422 [2024-05-15 02:38:51.753200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.422 [2024-05-15 02:38:51.753223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.422 [2024-05-15 02:38:51.753239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.422 [2024-05-15 02:38:51.756851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.422 [2024-05-15 02:38:51.766058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.422 [2024-05-15 02:38:51.766538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.766716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.766745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.422 [2024-05-15 02:38:51.766763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.422 [2024-05-15 02:38:51.767017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.422 [2024-05-15 02:38:51.767264] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.422 [2024-05-15 02:38:51.767288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.422 [2024-05-15 02:38:51.767304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.422 [2024-05-15 02:38:51.770919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.422 [2024-05-15 02:38:51.780105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.422 [2024-05-15 02:38:51.780555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.780876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.780911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.422 [2024-05-15 02:38:51.780939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.422 [2024-05-15 02:38:51.781184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.422 [2024-05-15 02:38:51.781430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.422 [2024-05-15 02:38:51.781456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.422 [2024-05-15 02:38:51.781472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.422 [2024-05-15 02:38:51.785097] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.422 [2024-05-15 02:38:51.794066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.422 [2024-05-15 02:38:51.794542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.794753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.794802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.422 [2024-05-15 02:38:51.794820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.422 [2024-05-15 02:38:51.795077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.422 [2024-05-15 02:38:51.795324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.422 [2024-05-15 02:38:51.795349] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.422 [2024-05-15 02:38:51.795365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.422 [2024-05-15 02:38:51.798990] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.422 [2024-05-15 02:38:51.807972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.422 [2024-05-15 02:38:51.808448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.808699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.808746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.422 [2024-05-15 02:38:51.808765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.422 [2024-05-15 02:38:51.809021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.422 [2024-05-15 02:38:51.809269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.422 [2024-05-15 02:38:51.809294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.422 [2024-05-15 02:38:51.809310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.422 [2024-05-15 02:38:51.812927] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.422 [2024-05-15 02:38:51.821993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.422 [2024-05-15 02:38:51.822570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.822842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.422 [2024-05-15 02:38:51.822872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.422 [2024-05-15 02:38:51.822896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.422 [2024-05-15 02:38:51.823156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.422 [2024-05-15 02:38:51.823409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.422 [2024-05-15 02:38:51.823434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.422 [2024-05-15 02:38:51.823450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.422 [2024-05-15 02:38:51.827120] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.683 [2024-05-15 02:38:51.836004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.683 [2024-05-15 02:38:51.836519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.683 [2024-05-15 02:38:51.836748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.683 [2024-05-15 02:38:51.836780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.683 [2024-05-15 02:38:51.836799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.683 [2024-05-15 02:38:51.837056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.683 [2024-05-15 02:38:51.837303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.683 [2024-05-15 02:38:51.837328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.683 [2024-05-15 02:38:51.837345] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.683 [2024-05-15 02:38:51.841003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.683 [2024-05-15 02:38:51.849982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.683 [2024-05-15 02:38:51.850522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.683 [2024-05-15 02:38:51.850783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.683 [2024-05-15 02:38:51.850813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.683 [2024-05-15 02:38:51.850831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.683 [2024-05-15 02:38:51.851088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.683 [2024-05-15 02:38:51.851334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.683 [2024-05-15 02:38:51.851359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.683 [2024-05-15 02:38:51.851375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.683 [2024-05-15 02:38:51.854999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.683 [2024-05-15 02:38:51.863970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.683 [2024-05-15 02:38:51.864535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.683 [2024-05-15 02:38:51.864810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.683 [2024-05-15 02:38:51.864841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.683 [2024-05-15 02:38:51.864860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.683 [2024-05-15 02:38:51.865123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.683 [2024-05-15 02:38:51.865370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.683 [2024-05-15 02:38:51.865395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.683 [2024-05-15 02:38:51.865411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.683 [2024-05-15 02:38:51.869035] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.683 [2024-05-15 02:38:51.878001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.683 [2024-05-15 02:38:51.878569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.878809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.878841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:51.878859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:51.879117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:51.879362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:51.879388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:51.879404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.684 [2024-05-15 02:38:51.883027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.684 [2024-05-15 02:38:51.891996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.684 [2024-05-15 02:38:51.892496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.892741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.892789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:51.892808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:51.893064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:51.893310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:51.893335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:51.893351] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.684 [2024-05-15 02:38:51.896975] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.684 [2024-05-15 02:38:51.905946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.684 [2024-05-15 02:38:51.906427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.906637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.906665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:51.906684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:51.906926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:51.907190] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:51.907216] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:51.907232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.684 [2024-05-15 02:38:51.910849] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.684 [2024-05-15 02:38:51.919823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.684 [2024-05-15 02:38:51.920317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.920548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.920596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:51.920614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:51.920857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:51.921119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:51.921144] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:51.921160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.684 [2024-05-15 02:38:51.924776] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.684 [2024-05-15 02:38:51.933758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.684 [2024-05-15 02:38:51.934229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.934506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.934552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:51.934571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:51.934814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:51.935072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:51.935098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:51.935113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.684 [2024-05-15 02:38:51.938729] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.684 [2024-05-15 02:38:51.947698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.684 [2024-05-15 02:38:51.948165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.948371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.948399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:51.948417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:51.948659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:51.948906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:51.948949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:51.948968] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.684 [2024-05-15 02:38:51.952589] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.684 [2024-05-15 02:38:51.961769] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.684 [2024-05-15 02:38:51.962233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.962468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.962515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:51.962533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:51.962775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:51.963037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:51.963072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:51.963089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.684 [2024-05-15 02:38:51.966710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.684 [2024-05-15 02:38:51.975689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.684 [2024-05-15 02:38:51.976152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.976393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.976422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:51.976440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:51.976682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:51.976940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:51.976965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:51.976981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.684 [2024-05-15 02:38:51.980598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.684 [2024-05-15 02:38:51.989773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.684 [2024-05-15 02:38:51.990258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.990475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:51.990504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:51.990522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:51.990764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:51.991025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:51.991051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:51.991072] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.684 [2024-05-15 02:38:51.994690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.684 [2024-05-15 02:38:52.003661] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.684 [2024-05-15 02:38:52.004145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:52.004457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.684 [2024-05-15 02:38:52.004486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.684 [2024-05-15 02:38:52.004504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.684 [2024-05-15 02:38:52.004745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.684 [2024-05-15 02:38:52.005002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.684 [2024-05-15 02:38:52.005028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.684 [2024-05-15 02:38:52.005045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.685 [2024-05-15 02:38:52.008658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.685 [2024-05-15 02:38:52.017627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.685 [2024-05-15 02:38:52.018105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.018324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.018353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.685 [2024-05-15 02:38:52.018371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.685 [2024-05-15 02:38:52.018612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.685 [2024-05-15 02:38:52.018859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.685 [2024-05-15 02:38:52.018884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.685 [2024-05-15 02:38:52.018901] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.685 [2024-05-15 02:38:52.022530] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.685 [2024-05-15 02:38:52.031714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.685 [2024-05-15 02:38:52.032200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.032463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.032514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.685 [2024-05-15 02:38:52.032532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.685 [2024-05-15 02:38:52.032773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.685 [2024-05-15 02:38:52.033033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.685 [2024-05-15 02:38:52.033059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.685 [2024-05-15 02:38:52.033076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.685 [2024-05-15 02:38:52.036699] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.685 [2024-05-15 02:38:52.045668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.685 [2024-05-15 02:38:52.046153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.046372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.046399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.685 [2024-05-15 02:38:52.046418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.685 [2024-05-15 02:38:52.046659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.685 [2024-05-15 02:38:52.046905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.685 [2024-05-15 02:38:52.046942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.685 [2024-05-15 02:38:52.046962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.685 [2024-05-15 02:38:52.050579] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.685 [2024-05-15 02:38:52.059757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.685 [2024-05-15 02:38:52.060219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.060455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.060502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.685 [2024-05-15 02:38:52.060520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.685 [2024-05-15 02:38:52.060762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.685 [2024-05-15 02:38:52.061022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.685 [2024-05-15 02:38:52.061048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.685 [2024-05-15 02:38:52.061065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.685 [2024-05-15 02:38:52.064684] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.685 [2024-05-15 02:38:52.073736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.685 [2024-05-15 02:38:52.074228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.074467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.074502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.685 [2024-05-15 02:38:52.074538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.685 [2024-05-15 02:38:52.074779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.685 [2024-05-15 02:38:52.075039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.685 [2024-05-15 02:38:52.075065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.685 [2024-05-15 02:38:52.075082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.685 [2024-05-15 02:38:52.078734] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.685 [2024-05-15 02:38:52.087748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.685 [2024-05-15 02:38:52.088216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.088499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.685 [2024-05-15 02:38:52.088528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.685 [2024-05-15 02:38:52.088547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.685 [2024-05-15 02:38:52.088788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.685 [2024-05-15 02:38:52.089046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.685 [2024-05-15 02:38:52.089072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.685 [2024-05-15 02:38:52.089088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.685 [2024-05-15 02:38:52.092724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.945 [2024-05-15 02:38:52.101770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.945 [2024-05-15 02:38:52.102353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.945 [2024-05-15 02:38:52.102664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.945 [2024-05-15 02:38:52.102694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.945 [2024-05-15 02:38:52.102713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.945 [2024-05-15 02:38:52.102974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.945 [2024-05-15 02:38:52.103225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.945 [2024-05-15 02:38:52.103250] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.945 [2024-05-15 02:38:52.103267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.945 [2024-05-15 02:38:52.106884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.945 [2024-05-15 02:38:52.115853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.945 [2024-05-15 02:38:52.116489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.945 [2024-05-15 02:38:52.116817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.945 [2024-05-15 02:38:52.116846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.945 [2024-05-15 02:38:52.116864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.945 [2024-05-15 02:38:52.117120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.945 [2024-05-15 02:38:52.117366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.945 [2024-05-15 02:38:52.117392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.945 [2024-05-15 02:38:52.117408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.945 [2024-05-15 02:38:52.121027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.945 [2024-05-15 02:38:52.129781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.945 [2024-05-15 02:38:52.130472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.945 [2024-05-15 02:38:52.130906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.945 [2024-05-15 02:38:52.130986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.945 [2024-05-15 02:38:52.131005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.945 [2024-05-15 02:38:52.131247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.945 [2024-05-15 02:38:52.131493] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.945 [2024-05-15 02:38:52.131518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.945 [2024-05-15 02:38:52.131534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.135157] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.946 [2024-05-15 02:38:52.143706] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.144199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.144551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.144601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.144619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.946 [2024-05-15 02:38:52.144860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.946 [2024-05-15 02:38:52.145119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.946 [2024-05-15 02:38:52.145145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.946 [2024-05-15 02:38:52.145162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.148780] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.946 [2024-05-15 02:38:52.157749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.158273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.158585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.158614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.158632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.946 [2024-05-15 02:38:52.158874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.946 [2024-05-15 02:38:52.159134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.946 [2024-05-15 02:38:52.159160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.946 [2024-05-15 02:38:52.159177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.162790] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.946 [2024-05-15 02:38:52.171768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.172245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.172576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.172612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.172631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.946 [2024-05-15 02:38:52.172874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.946 [2024-05-15 02:38:52.173134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.946 [2024-05-15 02:38:52.173160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.946 [2024-05-15 02:38:52.173177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.176794] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.946 [2024-05-15 02:38:52.185767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.186233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.186451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.186480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.186498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.946 [2024-05-15 02:38:52.186740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.946 [2024-05-15 02:38:52.186999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.946 [2024-05-15 02:38:52.187026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.946 [2024-05-15 02:38:52.187042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.190658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.946 [2024-05-15 02:38:52.199842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.200408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.200699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.200729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.200747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.946 [2024-05-15 02:38:52.201003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.946 [2024-05-15 02:38:52.201249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.946 [2024-05-15 02:38:52.201274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.946 [2024-05-15 02:38:52.201291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.204906] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.946 [2024-05-15 02:38:52.213879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.214336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.214525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.214552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.214579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.946 [2024-05-15 02:38:52.214822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.946 [2024-05-15 02:38:52.215082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.946 [2024-05-15 02:38:52.215108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.946 [2024-05-15 02:38:52.215124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.218741] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.946 [2024-05-15 02:38:52.227925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.228408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.228620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.228648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.228666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.946 [2024-05-15 02:38:52.228908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.946 [2024-05-15 02:38:52.229172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.946 [2024-05-15 02:38:52.229198] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.946 [2024-05-15 02:38:52.229215] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.232833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.946 [2024-05-15 02:38:52.241838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.242304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.242584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.242613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.242631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.946 [2024-05-15 02:38:52.242874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.946 [2024-05-15 02:38:52.243134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.946 [2024-05-15 02:38:52.243160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.946 [2024-05-15 02:38:52.243177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.246795] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.946 [2024-05-15 02:38:52.255764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.256249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.256551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.256599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.256618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.946 [2024-05-15 02:38:52.256865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.946 [2024-05-15 02:38:52.257124] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.946 [2024-05-15 02:38:52.257150] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.946 [2024-05-15 02:38:52.257167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.946 [2024-05-15 02:38:52.260781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.946 [2024-05-15 02:38:52.269750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.946 [2024-05-15 02:38:52.270236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.270456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.946 [2024-05-15 02:38:52.270484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.946 [2024-05-15 02:38:52.270502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.947 [2024-05-15 02:38:52.270743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.947 [2024-05-15 02:38:52.271003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.947 [2024-05-15 02:38:52.271030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.947 [2024-05-15 02:38:52.271046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.947 [2024-05-15 02:38:52.274664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.947 [2024-05-15 02:38:52.283635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.947 [2024-05-15 02:38:52.284110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.284503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.284532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.947 [2024-05-15 02:38:52.284549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.947 [2024-05-15 02:38:52.284790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2398726 Killed "${NVMF_APP[@]}" "$@" 00:22:04.947 [2024-05-15 02:38:52.285049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.947 [2024-05-15 02:38:52.285075] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.947 [2024-05-15 02:38:52.285091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:04.947 [2024-05-15 02:38:52.288709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2399808 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2399808 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2399808 ']' 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.947 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:04.947 [2024-05-15 02:38:52.297689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.947 [2024-05-15 02:38:52.298176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.298421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.298450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.947 [2024-05-15 02:38:52.298467] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.947 [2024-05-15 02:38:52.298708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.947 [2024-05-15 02:38:52.298965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.947 [2024-05-15 02:38:52.298991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.947 [2024-05-15 02:38:52.299007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.947 [2024-05-15 02:38:52.302626] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.947 [2024-05-15 02:38:52.311611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.947 [2024-05-15 02:38:52.312075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.312286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.312315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.947 [2024-05-15 02:38:52.312332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.947 [2024-05-15 02:38:52.312574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.947 [2024-05-15 02:38:52.312820] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.947 [2024-05-15 02:38:52.312845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.947 [2024-05-15 02:38:52.312861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.947 [2024-05-15 02:38:52.316484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
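[editor's note] At this point the previous target process has been killed (the "Killed ${NVMF_APP[@]}" notice above) and tgt_init/nvmfappstart relaunches nvmf_tgt with -i 0 -e 0xFFFF -m 0xE inside the cvl_0_0_ns_spdk namespace, then waits for the new PID (2399808) to answer on /var/tmp/spdk.sock. A minimal sketch of that relaunch-and-wait pattern, not the literal waitforlisten implementation (polling rpc_get_methods as the readiness check is an assumption):
+ sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
+ nvmfpid=$!
+ until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done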
00:22:04.947 [2024-05-15 02:38:52.325565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.947 [2024-05-15 02:38:52.326055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.326279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.326308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.947 [2024-05-15 02:38:52.326326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.947 [2024-05-15 02:38:52.326574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.947 [2024-05-15 02:38:52.326820] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.947 [2024-05-15 02:38:52.326844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.947 [2024-05-15 02:38:52.326860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.947 [2024-05-15 02:38:52.330490] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.947 [2024-05-15 02:38:52.338399] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:22:04.947 [2024-05-15 02:38:52.338470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.947 [2024-05-15 02:38:52.339458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.947 [2024-05-15 02:38:52.339943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.340190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.340219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.947 [2024-05-15 02:38:52.340236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.947 [2024-05-15 02:38:52.340478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.947 [2024-05-15 02:38:52.340724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.947 [2024-05-15 02:38:52.340748] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.947 [2024-05-15 02:38:52.340764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.947 [2024-05-15 02:38:52.344388] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.947 [2024-05-15 02:38:52.353360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.947 [2024-05-15 02:38:52.353837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.354096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.947 [2024-05-15 02:38:52.354126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:04.947 [2024-05-15 02:38:52.354144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:04.947 [2024-05-15 02:38:52.354390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:04.947 [2024-05-15 02:38:52.354641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.947 [2024-05-15 02:38:52.354666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.947 [2024-05-15 02:38:52.354682] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.947 [2024-05-15 02:38:52.358338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.207 [2024-05-15 02:38:52.367433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.207 [2024-05-15 02:38:52.367918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.368147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.368176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.207 [2024-05-15 02:38:52.368200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.207 [2024-05-15 02:38:52.368442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.207 [2024-05-15 02:38:52.368688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.207 [2024-05-15 02:38:52.368713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.207 [2024-05-15 02:38:52.368729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.207 [2024-05-15 02:38:52.372356] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.207 [2024-05-15 02:38:52.381319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.207 [2024-05-15 02:38:52.381796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.382019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.382049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.207 [2024-05-15 02:38:52.382067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.207 [2024-05-15 02:38:52.382309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.207 [2024-05-15 02:38:52.382555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.207 [2024-05-15 02:38:52.382579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.207 [2024-05-15 02:38:52.382595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.207 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.207 [2024-05-15 02:38:52.386219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.207 [2024-05-15 02:38:52.395406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.207 [2024-05-15 02:38:52.395896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.396150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.396180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.207 [2024-05-15 02:38:52.396198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.207 [2024-05-15 02:38:52.396439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.207 [2024-05-15 02:38:52.396685] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.207 [2024-05-15 02:38:52.396709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.207 [2024-05-15 02:38:52.396725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.207 [2024-05-15 02:38:52.400347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.207 [2024-05-15 02:38:52.409323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.207 [2024-05-15 02:38:52.409777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.409971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.410002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.207 [2024-05-15 02:38:52.410020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.207 [2024-05-15 02:38:52.410267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.207 [2024-05-15 02:38:52.410514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.207 [2024-05-15 02:38:52.410538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.207 [2024-05-15 02:38:52.410554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.207 [2024-05-15 02:38:52.414368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.207 [2024-05-15 02:38:52.423337] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.207 [2024-05-15 02:38:52.423831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.424087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.424117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.207 [2024-05-15 02:38:52.424134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.207 [2024-05-15 02:38:52.424376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.207 [2024-05-15 02:38:52.424621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.207 [2024-05-15 02:38:52.424645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.207 [2024-05-15 02:38:52.424661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.207 [2024-05-15 02:38:52.427067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:05.207 [2024-05-15 02:38:52.428290] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.207 [2024-05-15 02:38:52.437289] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.207 [2024-05-15 02:38:52.437956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.438170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.438202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.207 [2024-05-15 02:38:52.438225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.207 [2024-05-15 02:38:52.438478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.207 [2024-05-15 02:38:52.438730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.207 [2024-05-15 02:38:52.438754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.207 [2024-05-15 02:38:52.438774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.207 [2024-05-15 02:38:52.442406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.207 [2024-05-15 02:38:52.451381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.207 [2024-05-15 02:38:52.451913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.452148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.207 [2024-05-15 02:38:52.452180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.207 [2024-05-15 02:38:52.452199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.207 [2024-05-15 02:38:52.452454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.208 [2024-05-15 02:38:52.452702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.208 [2024-05-15 02:38:52.452726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.208 [2024-05-15 02:38:52.452743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.208 [2024-05-15 02:38:52.456369] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.208 [2024-05-15 02:38:52.465333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.208 [2024-05-15 02:38:52.465788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.466010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.466041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.208 [2024-05-15 02:38:52.466059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.208 [2024-05-15 02:38:52.466302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.208 [2024-05-15 02:38:52.466549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.208 [2024-05-15 02:38:52.466573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.208 [2024-05-15 02:38:52.466589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.208 [2024-05-15 02:38:52.470218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.208 [2024-05-15 02:38:52.479393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.208 [2024-05-15 02:38:52.479847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.480067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.480100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.208 [2024-05-15 02:38:52.480118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.208 [2024-05-15 02:38:52.480360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.208 [2024-05-15 02:38:52.480607] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.208 [2024-05-15 02:38:52.480632] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.208 [2024-05-15 02:38:52.480649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.208 [2024-05-15 02:38:52.484276] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.208 [2024-05-15 02:38:52.493457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.208 [2024-05-15 02:38:52.494136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.494395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.494425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.208 [2024-05-15 02:38:52.494446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.208 [2024-05-15 02:38:52.494697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.208 [2024-05-15 02:38:52.494966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.208 [2024-05-15 02:38:52.494991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.208 [2024-05-15 02:38:52.495011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.208 [2024-05-15 02:38:52.498630] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.208 [2024-05-15 02:38:52.507395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.208 [2024-05-15 02:38:52.507921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.508135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.508165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.208 [2024-05-15 02:38:52.508186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.208 [2024-05-15 02:38:52.508432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.208 [2024-05-15 02:38:52.508681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.208 [2024-05-15 02:38:52.508706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.208 [2024-05-15 02:38:52.508723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.208 [2024-05-15 02:38:52.512353] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.208 [2024-05-15 02:38:52.521318] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.208 [2024-05-15 02:38:52.521805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.522012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.522044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.208 [2024-05-15 02:38:52.522063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.208 [2024-05-15 02:38:52.522306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.208 [2024-05-15 02:38:52.522552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.208 [2024-05-15 02:38:52.522577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.208 [2024-05-15 02:38:52.522593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.208 [2024-05-15 02:38:52.526221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.208 [2024-05-15 02:38:52.535407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.208 [2024-05-15 02:38:52.535877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.536128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.536159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.208 [2024-05-15 02:38:52.536178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.208 [2024-05-15 02:38:52.536422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.208 [2024-05-15 02:38:52.536669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.208 [2024-05-15 02:38:52.536705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.208 [2024-05-15 02:38:52.536722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.208 [2024-05-15 02:38:52.540345] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.208 [2024-05-15 02:38:52.548305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.208 [2024-05-15 02:38:52.548341] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.208 [2024-05-15 02:38:52.548357] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.208 [2024-05-15 02:38:52.548372] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:05.208 [2024-05-15 02:38:52.548384] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.208 [2024-05-15 02:38:52.548455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.208 [2024-05-15 02:38:52.548511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.208 [2024-05-15 02:38:52.548515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.208 [2024-05-15 02:38:52.549314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.208 [2024-05-15 02:38:52.549751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.549954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.549984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.208 [2024-05-15 02:38:52.550002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.208 [2024-05-15 02:38:52.550244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.208 [2024-05-15 02:38:52.550491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.208 [2024-05-15 02:38:52.550515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.208 [2024-05-15 02:38:52.550531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.208 [2024-05-15 02:38:52.554156] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.208 [2024-05-15 02:38:52.563368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.208 [2024-05-15 02:38:52.564060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.564290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.564320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.208 [2024-05-15 02:38:52.564344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.208 [2024-05-15 02:38:52.564598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.208 [2024-05-15 02:38:52.564852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.208 [2024-05-15 02:38:52.564878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.208 [2024-05-15 02:38:52.564898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.208 [2024-05-15 02:38:52.568525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
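[editor's note] The reactor notices above show the restarted target running on cores 1, 2 and 3, which matches the -m 0xE core mask passed to nvmf_tgt (bits 1-3 set) and the "Total cores available: 3" message. A quick sanity check of the mask, assuming python3:
+ python3 -c 'mask=0xE; print([core for core in range(8) if mask >> core & 1])'
[1, 2, 3]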
00:22:05.208 [2024-05-15 02:38:52.577452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.208 [2024-05-15 02:38:52.578107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.578382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.208 [2024-05-15 02:38:52.578413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.208 [2024-05-15 02:38:52.578438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.208 [2024-05-15 02:38:52.578694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.209 [2024-05-15 02:38:52.578960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.209 [2024-05-15 02:38:52.578987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.209 [2024-05-15 02:38:52.579007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.209 [2024-05-15 02:38:52.582628] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.209 [2024-05-15 02:38:52.591627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.209 [2024-05-15 02:38:52.592320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.209 [2024-05-15 02:38:52.592571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.209 [2024-05-15 02:38:52.592606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.209 [2024-05-15 02:38:52.592631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.209 [2024-05-15 02:38:52.592896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.209 [2024-05-15 02:38:52.593163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.209 [2024-05-15 02:38:52.593189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.209 [2024-05-15 02:38:52.593210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.209 [2024-05-15 02:38:52.596838] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.209 [2024-05-15 02:38:52.605618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.209 [2024-05-15 02:38:52.606219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.209 [2024-05-15 02:38:52.606459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.209 [2024-05-15 02:38:52.606490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.209 [2024-05-15 02:38:52.606522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.209 [2024-05-15 02:38:52.606774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.209 [2024-05-15 02:38:52.607037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.209 [2024-05-15 02:38:52.607063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.209 [2024-05-15 02:38:52.607081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.209 [2024-05-15 02:38:52.610695] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.209 [2024-05-15 02:38:52.619727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.209 [2024-05-15 02:38:52.620352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.468 [2024-05-15 02:38:52.620638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.468 [2024-05-15 02:38:52.620680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.468 [2024-05-15 02:38:52.620707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.468 [2024-05-15 02:38:52.620986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.468 [2024-05-15 02:38:52.621245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.468 [2024-05-15 02:38:52.621272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.468 [2024-05-15 02:38:52.621292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.468 [2024-05-15 02:38:52.624944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.468 [2024-05-15 02:38:52.633784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.468 [2024-05-15 02:38:52.634362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.468 [2024-05-15 02:38:52.634594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.468 [2024-05-15 02:38:52.634625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.468 [2024-05-15 02:38:52.634647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.468 [2024-05-15 02:38:52.634899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.468 [2024-05-15 02:38:52.635161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.469 [2024-05-15 02:38:52.635187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.469 [2024-05-15 02:38:52.635206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.469 [2024-05-15 02:38:52.638822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.469 [2024-05-15 02:38:52.647798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.469 [2024-05-15 02:38:52.648267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.648516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.648546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.469 [2024-05-15 02:38:52.648565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.469 [2024-05-15 02:38:52.648807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.469 [2024-05-15 02:38:52.649065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.469 [2024-05-15 02:38:52.649090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.469 [2024-05-15 02:38:52.649107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.469 [2024-05-15 02:38:52.652731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.469 [2024-05-15 02:38:52.661497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.469 [2024-05-15 02:38:52.661926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.662163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.662190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.469 [2024-05-15 02:38:52.662214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.469 [2024-05-15 02:38:52.662447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.469 [2024-05-15 02:38:52.662663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.469 [2024-05-15 02:38:52.662685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.469 [2024-05-15 02:38:52.662699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.469 [2024-05-15 02:38:52.666006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:05.469 [2024-05-15 02:38:52.675102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.469 [2024-05-15 02:38:52.675639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.675806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.675834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.469 [2024-05-15 02:38:52.675850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.469 [2024-05-15 02:38:52.676079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.469 [2024-05-15 02:38:52.676313] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.469 [2024-05-15 02:38:52.676335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.469 [2024-05-15 02:38:52.676348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.469 [2024-05-15 02:38:52.679616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.469 [2024-05-15 02:38:52.688748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.469 [2024-05-15 02:38:52.689156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.689321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.689348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.469 [2024-05-15 02:38:52.689365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.469 [2024-05-15 02:38:52.689609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.469 [2024-05-15 02:38:52.689818] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.469 [2024-05-15 02:38:52.689839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.469 [2024-05-15 02:38:52.689852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.469 [2024-05-15 02:38:52.693156] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:05.469 [2024-05-15 02:38:52.697399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.469 [2024-05-15 02:38:52.702406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.469 [2024-05-15 02:38:52.702868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.469 [2024-05-15 02:38:52.703070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.703098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.469 [2024-05-15 02:38:52.703115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.469 [2024-05-15 02:38:52.703346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.469 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:05.469 [2024-05-15 02:38:52.703580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.469 [2024-05-15 02:38:52.703603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.469 [2024-05-15 02:38:52.703618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.469 [2024-05-15 02:38:52.706928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.469 [2024-05-15 02:38:52.715897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.469 [2024-05-15 02:38:52.716416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.716586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.716613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.469 [2024-05-15 02:38:52.716630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.469 [2024-05-15 02:38:52.716875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.469 [2024-05-15 02:38:52.717134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.469 [2024-05-15 02:38:52.717157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.469 [2024-05-15 02:38:52.717172] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.469 [2024-05-15 02:38:52.720443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.469 [2024-05-15 02:38:52.729547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.469 [2024-05-15 02:38:52.730278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.730520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.469 [2024-05-15 02:38:52.730551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.469 [2024-05-15 02:38:52.730572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.469 [2024-05-15 02:38:52.730835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.469 [2024-05-15 02:38:52.731091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.469 [2024-05-15 02:38:52.731114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.469 [2024-05-15 02:38:52.731132] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.469 [2024-05-15 02:38:52.734332] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
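Note: the repeated "connect() failed, errno = 111" entries above are ECONNREFUSED — the host-side reset path keeps dialing 10.0.0.2 port 4420 before the target's TCP listener exists, fails, and schedules another reset until the listener finally comes up. A minimal sketch of gating the host on the listener instead of leaning on that retry loop (this helper is an illustration only, not part of the autotest scripts):

    # Hypothetical helper, not taken from the autotest scripts: wait until the
    # NVMe/TCP listener on 10.0.0.2:4420 accepts a connection before starting
    # host-side I/O, which avoids the errno 111 reset/reconnect churn above.
    until (echo -n >/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        sleep 0.2
    done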
00:22:05.469 Malloc0 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:05.470 [2024-05-15 02:38:52.743374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.470 [2024-05-15 02:38:52.743767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.470 [2024-05-15 02:38:52.744030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.470 [2024-05-15 02:38:52.744070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224b990 with addr=10.0.0.2, port=4420 00:22:05.470 [2024-05-15 02:38:52.744087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b990 is same with the state(5) to be set 00:22:05.470 [2024-05-15 02:38:52.744306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b990 (9): Bad file descriptor 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:05.470 [2024-05-15 02:38:52.744539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.470 [2024-05-15 02:38:52.744562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.470 [2024-05-15 02:38:52.744577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:05.470 [2024-05-15 02:38:52.747857] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:05.470 [2024-05-15 02:38:52.756079] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:05.470 [2024-05-15 02:38:52.756387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.470 [2024-05-15 02:38:52.756953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.470 02:38:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2399139 00:22:05.470 [2024-05-15 02:38:52.875169] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:15.450 00:22:15.450 Latency(us) 00:22:15.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.450 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:15.450 Verification LBA range: start 0x0 length 0x4000 00:22:15.450 Nvme1n1 : 15.01 6949.06 27.14 8466.65 0.00 8276.65 904.15 19320.98 00:22:15.450 =================================================================================================================== 00:22:15.450 Total : 6949.06 27.14 8466.65 0.00 8276.65 904.15 19320.98 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.450 rmmod nvme_tcp 00:22:15.450 rmmod nvme_fabrics 00:22:15.450 rmmod nvme_keyring 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2399808 ']' 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2399808 
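For reference, the rpc_cmd trace above (host/bdevperf.sh@17 through @21) amounts to the following target bring-up sequence; the arguments are copied from the log, while the scripts/rpc.py invocation is an assumption based on the usual SPDK tree layout rather than something the trace shows directly:

    # Sketch reconstructed from the rpc_cmd calls traced above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420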
00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 2399808 ']' 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 2399808 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2399808 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2399808' 00:22:15.450 killing process with pid 2399808 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 2399808 00:22:15.450 [2024-05-15 02:39:02.191642] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 2399808 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.450 02:39:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.354 02:39:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.354 00:22:17.354 real 0m23.742s 00:22:17.354 user 0m56.846s 00:22:17.354 sys 0m6.858s 00:22:17.354 02:39:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:17.354 02:39:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:17.354 ************************************ 00:22:17.354 END TEST nvmf_bdevperf 00:22:17.354 ************************************ 00:22:17.354 02:39:04 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:22:17.354 02:39:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:17.354 02:39:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:17.355 02:39:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:17.355 ************************************ 00:22:17.355 START TEST nvmf_target_disconnect 00:22:17.355 ************************************ 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:22:17.355 * Looking for test storage... 
00:22:17.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.355 02:39:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:19.889 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:19.889 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.889 02:39:07 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:19.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:19.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.889 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:22:19.890 00:22:19.890 --- 10.0.0.2 ping statistics --- 00:22:19.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.890 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:22:19.890 00:22:19.890 --- 10.0.0.1 ping statistics --- 00:22:19.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.890 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.890 02:39:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:20.148 ************************************ 00:22:20.148 START TEST nvmf_target_disconnect_tc1 00:22:20.148 ************************************ 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:20.148 EAL: No 
free 2048 kB hugepages reported on node 1 00:22:20.148 [2024-05-15 02:39:07.433713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.148 [2024-05-15 02:39:07.433984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.148 [2024-05-15 02:39:07.434012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6ad60 with addr=10.0.0.2, port=4420 00:22:20.148 [2024-05-15 02:39:07.434049] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:20.148 [2024-05-15 02:39:07.434073] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:20.148 [2024-05-15 02:39:07.434087] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:22:20.148 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:22:20.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:22:20.148 Initializing NVMe Controllers 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:22:20.148 00:22:20.148 real 0m0.104s 00:22:20.148 user 0m0.035s 00:22:20.148 sys 0m0.067s 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:20.148 ************************************ 00:22:20.148 END TEST nvmf_target_disconnect_tc1 00:22:20.148 ************************************ 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:20.148 ************************************ 00:22:20.148 START TEST nvmf_target_disconnect_tc2 00:22:20.148 ************************************ 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2403635 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2403635 00:22:20.148 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2403635 ']' 00:22:20.149 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.149 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:20.149 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:20.149 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.149 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:20.149 02:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.149 [2024-05-15 02:39:07.547437] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:22:20.149 [2024-05-15 02:39:07.547518] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.408 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.408 [2024-05-15 02:39:07.630311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:20.408 [2024-05-15 02:39:07.742945] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.408 [2024-05-15 02:39:07.742996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.408 [2024-05-15 02:39:07.743019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.408 [2024-05-15 02:39:07.743031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.408 [2024-05-15 02:39:07.743042] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
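The interface plumbing traced in nvmf_tcp_init earlier puts one E810 port (cvl_0_0, 10.0.0.2) inside the cvl_0_0_ns_spdk namespace as the target side and leaves the other port (cvl_0_1, 10.0.0.1) in the root namespace as the initiator side; nvmfappstart then launches the target app inside that namespace. Condensed from the commands the trace shows (paths shortened, otherwise as logged):

    # Condensed reconstruction of the nvmf_tcp_init / nvmfappstart steps traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Target runs inside the namespace; -m 0xF0 pins it to cores 4-7, matching
    # the four reactors reported just below.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0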
00:22:20.408 [2024-05-15 02:39:07.743122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:20.408 [2024-05-15 02:39:07.743155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:20.408 [2024-05-15 02:39:07.743212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:20.408 [2024-05-15 02:39:07.743215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.347 Malloc0 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.347 [2024-05-15 02:39:08.533850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.347 02:39:08 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.347 [2024-05-15 02:39:08.561864] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:21.347 [2024-05-15 02:39:08.562170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=2403981 00:22:21.347 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:22:21.348 02:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:21.348 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.254 02:39:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 2403635 00:22:23.254 02:39:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting 
I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 [2024-05-15 02:39:10.588297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 
00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 [2024-05-15 02:39:10.588690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Write completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.254 starting I/O failed 00:22:23.254 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 
Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Read completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 Write completed with error (sct=0, sc=8) 00:22:23.255 starting I/O failed 00:22:23.255 [2024-05-15 02:39:10.589027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:23.255 [2024-05-15 02:39:10.589258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.589438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.589466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.589760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.590038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.590066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.590245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.590454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.590478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.590707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.590961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.591005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.591172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.591590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.591644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.591988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.592163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.592191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 
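[editor's note] The failure pattern that repeats above and below is consistent throughout this run: posix_sock_create() reports connect() failed with errno = 111, which on Linux is ECONNREFUSED (nothing was accepting TCP connections at 10.0.0.2:4420 at that moment), nvme_tcp_qpair_connect_sock() then reports the socket connection error for the qpair, and the test prints that the qpair failed and could not be recovered. The "CQ transport error -6 (No such device or address)" lines from spdk_nvme_qpair_process_completions() use the negative-errno convention, where -6 is -ENXIO. A minimal standalone sketch follows (plain POSIX sockets, not SPDK code; the 10.0.0.2:4420 address simply mirrors the target address in the log) showing how those two errno values surface:

    /* Minimal standalone sketch (not SPDK code): a TCP connect() to an
     * address/port with no listener fails with errno 111 (ECONNREFUSED)
     * on Linux, which is the errno posix_sock_create() logs above.
     * 10.0.0.2:4420 mirrors the NVMe/TCP target address from this log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port used in the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on the target this prints errno 111 on Linux. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        /* The "CQ transport error -6 (No such device or address)" lines above
         * follow the same errno naming: -6 is -ENXIO. */
        printf("-6 -> %s\n", strerror(ENXIO));

        close(fd);
        return 0;
    }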
00:22:23.255 [2024-05-15 02:39:10.592411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.592674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.592701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.592948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.593120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.593147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.593324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.593518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.593543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.593813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.594029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.594055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.594234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.594421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.594446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.594623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.594821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.594846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.595049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.595226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.595252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 
00:22:23.255 [2024-05-15 02:39:10.595409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.595633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.595658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.595860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.596086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.596112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.596285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.596514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.596539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.596764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.596937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.596963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.597129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.597333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.597376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.597625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.597821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.597847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.598033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.598215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.598256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 
00:22:23.255 [2024-05-15 02:39:10.598460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.598685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.598710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.598920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.599106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.599132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.599330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.599488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.599513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.255 [2024-05-15 02:39:10.599705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.599941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.255 [2024-05-15 02:39:10.599977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.255 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.600164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.600321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.600345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.600554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.600873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.600915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.601108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.601328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.601371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 
00:22:23.256 [2024-05-15 02:39:10.601570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.601830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.601854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.602029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.602238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.602264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.602520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.602724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.602752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.603023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.604386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.604427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.604651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.604921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.604956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.605129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.605347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.605390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.605636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.605822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.605850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 
00:22:23.256 [2024-05-15 02:39:10.606047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.606223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.606248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.606418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.606615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.606640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.606811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.607002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.607028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.607203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.607380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.607406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.607598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.607794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.607820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.608015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.608198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.608224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.608395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.608557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.608583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 
00:22:23.256 [2024-05-15 02:39:10.608750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.608969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.608996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.609163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.609356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.609382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.609575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.609764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.609790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.609953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.610146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.610173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.610370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.610557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.610587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.610795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.610987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.611014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.611179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.611345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.611370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 
00:22:23.256 [2024-05-15 02:39:10.611562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.611736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.611763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.611959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.612152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.612182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.612347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.612514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.612539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.612739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.612940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.612966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.613135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.613327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.613352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.256 qpair failed and we were unable to recover it. 00:22:23.256 [2024-05-15 02:39:10.613547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.256 [2024-05-15 02:39:10.613712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.613738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.614000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.614171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.614198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 
00:22:23.257 [2024-05-15 02:39:10.614411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.614622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.614666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.614829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.615000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.615026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.615185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.615396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.615421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.615647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.615884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.615910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.616088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.616310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.616360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.616669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.616898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.616924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.617101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.617307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.617334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 
00:22:23.257 [2024-05-15 02:39:10.617602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.617846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.617871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.618046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.618208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.618233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.618420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.618661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.618703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.618926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.619107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.619133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.619331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.619550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.619579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.619764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.619937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.619963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.620138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.620311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.620337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 
00:22:23.257 [2024-05-15 02:39:10.620551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.620769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.620814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.621011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.621202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.621227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.621480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.621713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.621756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.621935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.622110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.622136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.622308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.622513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.622540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.622770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.623021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.623047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.623223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.623404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.623432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 
00:22:23.257 [2024-05-15 02:39:10.623691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.623903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.623936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.624108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.624274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.624301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.624532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.624741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.624766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.624992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.625163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.625203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.625395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.625571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.625612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.625836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.626000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.626028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.257 qpair failed and we were unable to recover it. 00:22:23.257 [2024-05-15 02:39:10.626228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.257 [2024-05-15 02:39:10.626447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.626472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 
00:22:23.258 [2024-05-15 02:39:10.626666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.626872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.626897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.627097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.627314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.627357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.627570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.627807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.627832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.628000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.628176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.628202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.628397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.628739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.628787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.628964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.629135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.629163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.629382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.629595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.629638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 
00:22:23.258 [2024-05-15 02:39:10.629834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.630032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.630059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.630258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.630428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.630455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.630630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.630851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.630877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.631049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.631239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.631265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.631476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.631727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.631752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.631954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.632158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.632184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.632407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.632711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.632739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 
00:22:23.258 [2024-05-15 02:39:10.632954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.633125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.633152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.633324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.633548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.633574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.633767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.633943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.633970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.634173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.634369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.634412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.634634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.634823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.634848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.635040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.635244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.635272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.635484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.635749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.635774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 
00:22:23.258 [2024-05-15 02:39:10.636003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.636195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.636220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.636411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.636602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.258 [2024-05-15 02:39:10.636646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.258 qpair failed and we were unable to recover it. 00:22:23.258 [2024-05-15 02:39:10.636842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.637038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.637065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.637260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.637473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.637501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.637720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.637887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.637912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.638130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.638311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.638337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.638562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.638796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.638822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 
00:22:23.259 [2024-05-15 02:39:10.639047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.639258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.639301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.639542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.639780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.639823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.640099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.640306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.640331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.640511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.640731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.640758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.640958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.641160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.641186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.641408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.641612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.641655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.641992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.642164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.642190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 
00:22:23.259 [2024-05-15 02:39:10.642386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.642593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.642636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.642834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.643025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.643050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.643229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.643422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.643465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.643678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.643886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.643926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.644144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.644371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.644413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.644663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.644900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.644925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.645133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.645336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.645378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 
00:22:23.259 [2024-05-15 02:39:10.645631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.645840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.645867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.646098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.646284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.646309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.646506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.646701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.646743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.646939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.647150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.647175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.647388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.647678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.647703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.647928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.648143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.648168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.648338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.648562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.648613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 
00:22:23.259 [2024-05-15 02:39:10.648832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.649053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.649080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.649296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.649484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.649526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.649798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.650036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.259 [2024-05-15 02:39:10.650079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.259 qpair failed and we were unable to recover it. 00:22:23.259 [2024-05-15 02:39:10.650284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.650516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.650544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.650753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.650923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.650979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.651207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.651492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.651517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.651699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.651966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.651992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 
00:22:23.260 [2024-05-15 02:39:10.652229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.652484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.652540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.652746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.652959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.652986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.653176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.653436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.653460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.653659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.653872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.653896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.654112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.654317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.654358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.654594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.654797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.654821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.655012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.655214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.655239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 
00:22:23.260 [2024-05-15 02:39:10.655433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.655587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.655611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.655801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.655964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.655991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.656175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.656396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.656425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.656651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.656898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.656945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.657196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.657417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.657442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.657660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.657854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.657879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.658176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.658485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.658528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 
00:22:23.260 [2024-05-15 02:39:10.658769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.658968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.658995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.659167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.659354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.659379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.659599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.659904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.659953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.660151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.660340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.660364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.660564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.660758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.660784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.661058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.661239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.661267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.661485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.661663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.661687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 
00:22:23.260 [2024-05-15 02:39:10.661905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.662148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.662191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.662417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.662677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.662718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.662872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.663082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.663108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.663332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.663524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.260 [2024-05-15 02:39:10.663549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.260 qpair failed and we were unable to recover it. 00:22:23.260 [2024-05-15 02:39:10.663745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.261 [2024-05-15 02:39:10.663943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.261 [2024-05-15 02:39:10.663968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.261 qpair failed and we were unable to recover it. 00:22:23.261 [2024-05-15 02:39:10.664126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.664400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.664451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.664670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.664894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.664919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 
00:22:23.532 [2024-05-15 02:39:10.665146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.665344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.665388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.665611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.665832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.665858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.666046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.666275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.666318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.666504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.666736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.666762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.667033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.667275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.667303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.667579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.667764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.667789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.668002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.668365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.668423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 
00:22:23.532 [2024-05-15 02:39:10.668649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.668840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.668865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.669050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.669236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.669278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.669470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.669709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.669751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.669947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.670198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.670226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.670441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.670739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.670765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.670960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.671137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.671162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.671331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.671531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.671556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 
00:22:23.532 [2024-05-15 02:39:10.671767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.672016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.672060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.672289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.672528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.672553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.672745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.672940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.672966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.673193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.673474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.673519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.673678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.673875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.673900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.674076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.674288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.674332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.674533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.674741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.674765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 
00:22:23.532 [2024-05-15 02:39:10.674928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.675138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.675163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.532 [2024-05-15 02:39:10.675394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.675613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.532 [2024-05-15 02:39:10.675638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.532 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.675829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.676055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.676083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.676331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.676575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.676601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.676817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.677012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.677039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.677257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.677460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.677504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.677764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.677943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.677970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 
00:22:23.533 [2024-05-15 02:39:10.678192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.678425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.678469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.678702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.678921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.678962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.679181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.679416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.679441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.679659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.679872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.679897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.680110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.680306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.680331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.680493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.680692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.680721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.680944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.681200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.681242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 
00:22:23.533 [2024-05-15 02:39:10.681489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.681719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.681747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.681956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.682192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.682244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.682458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.682666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.682709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.682933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.683153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.683179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.683402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.683559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.683584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.683797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.684007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.684033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.684278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.684582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.684641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 
00:22:23.533 [2024-05-15 02:39:10.684836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.685074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.685104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.685374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.685749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.685806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.686019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.686330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.686383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.686646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.686822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.686847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.687069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.687315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.687341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.687529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.687752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.687794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 00:22:23.533 [2024-05-15 02:39:10.688038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.688278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.533 [2024-05-15 02:39:10.688304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.533 qpair failed and we were unable to recover it. 
00:22:23.533 [2024-05-15 02:39:10.688498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.688708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.688734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.688940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.689162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.689204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.689424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.689668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.689693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.689918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.690119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.690162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.690414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.690626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.690673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.690845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.691042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.691069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.691279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.691569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.691594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 
00:22:23.534 [2024-05-15 02:39:10.691781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.691943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.691969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.692196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.692487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.692513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.692730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.692939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.692966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.693161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.693374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.693400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.693597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.693762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.693787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.694032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.694267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.694297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.694505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.694717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.694742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 
00:22:23.534 [2024-05-15 02:39:10.694937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.695209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.695239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.695432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.695654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.695681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.695852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.696074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.696118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.696316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.696586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.696629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.696797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.697033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.697062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.697327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.697576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.697620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.697789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.697972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.698002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 
00:22:23.534 [2024-05-15 02:39:10.698245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.698471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.698497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.698691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.698891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.698916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.699163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.699413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.699438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.699660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.699852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.699878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.534 [2024-05-15 02:39:10.700107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.700352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.534 [2024-05-15 02:39:10.700378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.534 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.700620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.700831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.700855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.701071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.701339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.701381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 
00:22:23.535 [2024-05-15 02:39:10.701589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.701802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.701827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.702044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.702306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.702332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.702549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.702756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.702782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.702951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.703172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.703196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.703379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.703731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.703781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.704000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.704233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.704275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.704507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.704670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.704695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 
00:22:23.535 [2024-05-15 02:39:10.704892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.705099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.705128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.705401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.705572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.705599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.705820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.706067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.706111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.706363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.706692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.706749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.706972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.707166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.707208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.707425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.707678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.707732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.707913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.708142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.708184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 
00:22:23.535 [2024-05-15 02:39:10.708408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.708651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.708675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.708844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.709014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.709041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.709291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.709586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.709635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.709842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.710038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.710081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.710334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.710562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.710588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.710838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.711051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.711094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.711290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.711489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.711531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 
00:22:23.535 [2024-05-15 02:39:10.711730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.711920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.535 [2024-05-15 02:39:10.711951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.535 qpair failed and we were unable to recover it. 00:22:23.535 [2024-05-15 02:39:10.712159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.712350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.712375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.712699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.712902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.712938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.713164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.713464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.713507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.713769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.713979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.714005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.714232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.714473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.714515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.714768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.715005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.715049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 
00:22:23.536 [2024-05-15 02:39:10.715267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.715554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.715598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.715804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.716053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.716096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.716358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.716695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.716744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.716987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.717167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.717194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.717412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.717611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.717636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.717826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.718029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.718055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.718278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.718587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.718636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 
00:22:23.536 [2024-05-15 02:39:10.718825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.719072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.719115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.719341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.719645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.719689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.719903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.720129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.720171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.720373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.720568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.720612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.720851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.721028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.721055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.721276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.721538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.721581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.721785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.722009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.722035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 
00:22:23.536 [2024-05-15 02:39:10.722217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.722431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.722473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.722757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.723038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.723064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.536 [2024-05-15 02:39:10.723316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.723567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.536 [2024-05-15 02:39:10.723595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.536 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.723880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.724120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.724147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.724333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.724547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.724589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.724854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.725117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.725143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.725369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.725633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.725676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 
00:22:23.537 [2024-05-15 02:39:10.725885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.726116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.726162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.726428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.726647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.726689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.726890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.727069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.727095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.727291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.727523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.727565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.727796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.727980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.728006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.728222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.728453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.728497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.728714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.728926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.728959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 
00:22:23.537 [2024-05-15 02:39:10.729192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.729398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.729441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.729710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.729900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.729926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.730154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.730376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.730401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.730616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.730875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.730900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.537 qpair failed and we were unable to recover it. 00:22:23.537 [2024-05-15 02:39:10.731148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.537 [2024-05-15 02:39:10.731482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.731527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.731756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.731923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.731968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.732237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.732479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.732505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 
00:22:23.538 [2024-05-15 02:39:10.732740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.732949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.732975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.733174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.733390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.733433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.733646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.733888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.733937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.734151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.734371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.734396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.734673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.734916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.734948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.735107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.735328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.735371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.735597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.735839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.735865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 
00:22:23.538 [2024-05-15 02:39:10.736035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.736295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.736345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.736552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.736777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.736800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.737000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.737268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.737330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.737552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.737789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.737814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.738005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.738209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.738234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.738463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.738657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.738682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.738908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.739109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.739151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 
00:22:23.538 [2024-05-15 02:39:10.739396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.739634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.739676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.739876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.740092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.740138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.740369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.740670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.740724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.740953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.741152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.741194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.741421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.741635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.741660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.741856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.742100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.742144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.742337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.742502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.742527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 
00:22:23.538 [2024-05-15 02:39:10.742697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.742890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.742916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.743177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.743407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.538 [2024-05-15 02:39:10.743449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.538 qpair failed and we were unable to recover it. 00:22:23.538 [2024-05-15 02:39:10.743641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.743853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.743878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.744102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.744342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.744367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.744554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.744797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.744823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.745014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.745258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.745284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.745506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.745696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.745746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 
00:22:23.539 [2024-05-15 02:39:10.745944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.746113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.746137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.746338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.746496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.746521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.746735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.746946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.746978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.747194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.747457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.747508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.747736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.747924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.747955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.748158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.748348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.748373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.748624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.748835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.748861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 
00:22:23.539 [2024-05-15 02:39:10.749031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.749225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.749267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.749539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.749821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.749873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.750044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.750289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.750314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.750559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.750780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.750806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.750982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.751214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.751243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.751522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.751728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.751753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.751953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.752183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.752209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 
00:22:23.539 [2024-05-15 02:39:10.752457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.752691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.752719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.752950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.753143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.753169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.753389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.753577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.753601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.753837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.753997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.754023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.754218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.754445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.754487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.754705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.754945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.754972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.755179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.755411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.755454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 
00:22:23.539 [2024-05-15 02:39:10.755686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.755875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.755900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.756099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.756361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.756387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.756605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.756831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.756858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.539 qpair failed and we were unable to recover it. 00:22:23.539 [2024-05-15 02:39:10.757109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.539 [2024-05-15 02:39:10.757347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.757376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.757621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.757839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.757865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.758088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.758308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.758337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.758580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.758763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.758788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 
00:22:23.540 [2024-05-15 02:39:10.758953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.759176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.759224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.759410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.759640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.759683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.759920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.760116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.760140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.760297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.760501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.760527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.760715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.760907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.760947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.761113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.761362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.761405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.761658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.761857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.761883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 
00:22:23.540 [2024-05-15 02:39:10.762080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.762424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.762479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.762697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.762916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.762955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.763123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.763360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.763413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.763636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.763845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.763871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.764070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.764300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.764354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.764571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.764804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.764829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.765044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.765253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.765296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 
00:22:23.540 [2024-05-15 02:39:10.765523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.765689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.765714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.765879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.766100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.766146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.766407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.766603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.766627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.766791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.767035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.767079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.767310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.767538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.767585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.767750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.767944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.767970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.768181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.768460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.768511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 
00:22:23.540 [2024-05-15 02:39:10.768757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.768969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.768995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.769159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.769334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.769362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.769623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.769836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.769862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.540 qpair failed and we were unable to recover it. 00:22:23.540 [2024-05-15 02:39:10.770033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.540 [2024-05-15 02:39:10.770251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.770277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.770498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.770811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.770860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.771084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.771349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.771397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.771666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.771864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.771889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 
00:22:23.541 [2024-05-15 02:39:10.772096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.772290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.772319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.772512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.772808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.772834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.773030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.773243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.773285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.773475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.773690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.773733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.773927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.774152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.774178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.774392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.774617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.774659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.774853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.775077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.775103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 
00:22:23.541 [2024-05-15 02:39:10.775296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.775555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.775581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.775801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.776011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.776054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.776248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.776514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.776556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.776748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.776969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.776996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.777219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.777445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.777488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.777737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.777948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.777974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.778196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.778457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.778500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 
00:22:23.541 [2024-05-15 02:39:10.778692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.778904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.778936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.779129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.779407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.779456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.779641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.779877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.779903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.780104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.780302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.780327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.780513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.780785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.541 [2024-05-15 02:39:10.780828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.541 qpair failed and we were unable to recover it. 00:22:23.541 [2024-05-15 02:39:10.781072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.781283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.781326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.781508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.781730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.781757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 
00:22:23.542 [2024-05-15 02:39:10.781952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.782212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.782241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.782504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.782689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.782715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.782941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.783165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.783208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.783455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.783708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.783759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.783971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.784137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.784162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.784407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.784626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.784651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.784811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.785033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.785060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 
00:22:23.542 [2024-05-15 02:39:10.785262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.785492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.785535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.785762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.785973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.786000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.786228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.786463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.786506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.786731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.786945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.786978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.787231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.787496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.787539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.787761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.787951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.787976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 00:22:23.542 [2024-05-15 02:39:10.788165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.788433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.542 [2024-05-15 02:39:10.788477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.542 qpair failed and we were unable to recover it. 
00:22:23.543 [2024-05-15 02:39:10.788645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.788864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.788889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.789098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.789298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.789323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.789544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.789786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.789811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.790003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.790244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.790286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.790495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.790683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.790709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.790935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.791154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.791199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.791427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.791658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.791701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 
00:22:23.543 [2024-05-15 02:39:10.791927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.792119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.792145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.792388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.792641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.792701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.792927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.793128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.793153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.793402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.793668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.793693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.793880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.794075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.794101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.794349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.794579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.794621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.794845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.795058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.795084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 
00:22:23.543 [2024-05-15 02:39:10.795296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.795531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.795583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.795749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.795969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.795995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.796233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.796398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.796423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.796586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.796806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.796832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.797027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.797193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.797219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.797412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.797640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.797666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.797863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.798022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.798048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 
00:22:23.543 [2024-05-15 02:39:10.798268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.798460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.798485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.543 qpair failed and we were unable to recover it. 00:22:23.543 [2024-05-15 02:39:10.798641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.543 [2024-05-15 02:39:10.798860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.798885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.799108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.799375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.799429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.799694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.799887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.799913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.800153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.800319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.800343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.800515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.800737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.800763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.800970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.801158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.801186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 
00:22:23.544 [2024-05-15 02:39:10.801386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.801626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.801651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.801853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.802078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.802104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.802327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.802552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.802578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.802742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.802964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.802989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.803179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.803339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.803364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.803556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.803754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.803779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.803976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.804139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.804166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 
00:22:23.544 [2024-05-15 02:39:10.804392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.804697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.804740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.804990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.805226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.805270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.805523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.805744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.805769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.805940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.806137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.806162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.806380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.806605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.806650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.806869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.807061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.807087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.807299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.807540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.807582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 
00:22:23.544 [2024-05-15 02:39:10.807803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.808010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.808052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.808300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.808530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.808572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.544 qpair failed and we were unable to recover it. 00:22:23.544 [2024-05-15 02:39:10.808838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.544 [2024-05-15 02:39:10.809038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.809064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.809267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.809467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.809492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.809688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.809861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.809886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.810124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.810309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.810351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.810537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.810745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.810770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 
00:22:23.545 [2024-05-15 02:39:10.810974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.811212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.811256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.811528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.811725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.811769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.811940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.812103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.812130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.812343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.812653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.812707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.812877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.813077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.813121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.813338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.813580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.813623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.813789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.813964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.813991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 
00:22:23.545 [2024-05-15 02:39:10.814239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.814436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.814463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.814651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.814888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.814914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.815147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.815385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.815410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.815622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.815833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.815857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.816054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.816262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.816305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.816528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.816753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.816796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.817020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.817217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.817258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 
00:22:23.545 [2024-05-15 02:39:10.817505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.817748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.817800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.818044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.818259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.818303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.818519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.818731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.818756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.818951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.819172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.819215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.819430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.819669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.819711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.545 qpair failed and we were unable to recover it. 00:22:23.545 [2024-05-15 02:39:10.819900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.545 [2024-05-15 02:39:10.820067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.820094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.820314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.820541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.820584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 
00:22:23.546 [2024-05-15 02:39:10.820805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.821052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.821098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.821313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.821565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.821592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.821759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.821987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.822014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.822240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.822538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.822589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.822810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.823023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.823068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.823256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.823529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.823555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.823779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.824002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.824032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 
00:22:23.546 [2024-05-15 02:39:10.824267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.824506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.824550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.824737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.824962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.824989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.825202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.825447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.825504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.825750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.825963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.825990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.826226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.826451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.826493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.826747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.826959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.826986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.827183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.827434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.827476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 
00:22:23.546 [2024-05-15 02:39:10.827690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.827891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.827915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.828118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.828309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.828352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.828623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.828795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.828820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.828990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.829239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.829282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.829530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.829804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.829847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.830095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.830342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.830368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 00:22:23.546 [2024-05-15 02:39:10.830585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.830822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.830848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.546 qpair failed and we were unable to recover it. 
00:22:23.546 [2024-05-15 02:39:10.831041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.831236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.546 [2024-05-15 02:39:10.831261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.831444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.831714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.831740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.831936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.832108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.832134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.832322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.832555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.832598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.832817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.833032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.833059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.833280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.833510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.833557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.833777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.834003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.834047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 
00:22:23.547 [2024-05-15 02:39:10.834275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.834508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.834551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.834747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.834943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.834970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.835164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.835401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.835443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.835662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.835902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.835928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.836099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.836321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.836363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.836547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.836797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.836840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.837046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.837288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.837337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 
00:22:23.547 [2024-05-15 02:39:10.837557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.837783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.837827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.838051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.838305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.838370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.838621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.838830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.838855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.839075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.839288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.839332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.839582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.839799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.839826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.840021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.840260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.840304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.547 [2024-05-15 02:39:10.840529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.840747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.840772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 
00:22:23.547 [2024-05-15 02:39:10.840942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.841141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.547 [2024-05-15 02:39:10.841167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.547 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.841422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.841654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.841696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.841888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.842067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.842094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.842317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.842542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.842568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.842746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.842942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.842973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.843139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.843409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.843435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.843661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.843876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.843901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 
00:22:23.548 [2024-05-15 02:39:10.844152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.844365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.844407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.844592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.844800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.844825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.844999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.845253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.845296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.845469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.845699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.845724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.845914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.846115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.846140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.846386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.846594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.846635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.846831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.847052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.847079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 
00:22:23.548 [2024-05-15 02:39:10.847273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.847477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.847527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.847758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.847914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.847946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.848142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.848496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.848545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.848762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.849034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.849079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.548 qpair failed and we were unable to recover it. 00:22:23.548 [2024-05-15 02:39:10.849331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.548 [2024-05-15 02:39:10.849525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.849552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.849747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.849938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.849965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.850158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.850367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.850411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 
00:22:23.549 [2024-05-15 02:39:10.850612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.850849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.850874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.851123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.851326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.851368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.851563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.851749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.851775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.851976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.852160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.852204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.852423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.852621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.852663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.852826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.853012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.853056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.853281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.853532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.853578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 
00:22:23.549 [2024-05-15 02:39:10.853766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.853937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.853963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.854121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.854348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.854390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.854638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.854817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.854844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.855053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.855337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.855387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.855604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.855817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.855843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.856051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.856262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.856305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.856514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.856752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.856795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 
00:22:23.549 [2024-05-15 02:39:10.856961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.857213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.857255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.857478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.857686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.857727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.857944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.858129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.858172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.858390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.858619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.858662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.858876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.859053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.859080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.859275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.859527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.859569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.859800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.859966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.859993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 
00:22:23.549 [2024-05-15 02:39:10.860224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.860546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.860603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.860771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.860993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.861020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.861242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.861466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.861509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.861715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.861926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.861958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.549 qpair failed and we were unable to recover it. 00:22:23.549 [2024-05-15 02:39:10.862124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.549 [2024-05-15 02:39:10.862342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.862384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.862566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.862826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.862870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.863040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.863258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.863301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 
00:22:23.550 [2024-05-15 02:39:10.863503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.863740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.863768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.864010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.864251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.864279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.864518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.864741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.864766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.864962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.865193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.865236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.865430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.865669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.865713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.865905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.866103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.866129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.866352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.866756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.866801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 
00:22:23.550 [2024-05-15 02:39:10.867008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.867228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.867270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.867501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.867761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.867787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.867990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.868187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.868213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.868407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.868680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.868723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.868946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.869162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.869209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.869395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.869635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.869677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.869877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.870069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.870096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 
00:22:23.550 [2024-05-15 02:39:10.870261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.870462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.870505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.870725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.870935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.870962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.871147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.871392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.871436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.871651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.871862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.871887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.872055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.872314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.872359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.872571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.872775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.872801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.872978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.873243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.873303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 
00:22:23.550 [2024-05-15 02:39:10.873488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.873663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.873689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.873883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.874121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.874167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.874395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.874630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.874672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.874844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.875101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.875146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.875348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.875619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.875661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.550 [2024-05-15 02:39:10.875885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.876058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.550 [2024-05-15 02:39:10.876084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.550 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.876318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.876568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.876610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 
00:22:23.551 [2024-05-15 02:39:10.876835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.877035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.877061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.877283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.877497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.877538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.877787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.878004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.878049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.878303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.878721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.878776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.878980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.879178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.879204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.879394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.879599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.879644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.879863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.880033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.880059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 
00:22:23.551 [2024-05-15 02:39:10.880274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.880577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.880622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.880796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.881007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.881052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.881301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.881529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.881573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.881778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.881947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.881974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.882162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.882356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.882380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.882550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.882745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.882770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.882963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.883193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.883217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 
00:22:23.551 [2024-05-15 02:39:10.883463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.883625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.883655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.883883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.884079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.884106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.884332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.884560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.884602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.884795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.884990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.885017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Read completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 Write completed with error (sct=0, sc=8)
00:22:23.551 starting I/O failed
00:22:23.551 [2024-05-15 02:39:10.885417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:23.551 [2024-05-15 02:39:10.885654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.551 [2024-05-15 02:39:10.885887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.551 [2024-05-15 02:39:10.885915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b48000b90 with addr=10.0.0.2, port=4420
00:22:23.551 qpair failed and we were unable to recover it.
00:22:23.551 [2024-05-15 02:39:10.886165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.886337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.886365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b48000b90 with addr=10.0.0.2, port=4420 00:22:23.551 qpair failed and we were unable to recover it. 00:22:23.551 [2024-05-15 02:39:10.886610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.551 [2024-05-15 02:39:10.886852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.886878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b48000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.887088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.887288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.887315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b48000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.887486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.887692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.887719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b48000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.887918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.888103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.888131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.888317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.888608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.888676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.888890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.889114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.889140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 
00:22:23.552 [2024-05-15 02:39:10.889331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.889590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.889650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.889892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.890129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.890156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.890350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.890548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.890591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.890761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.890964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.891002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.891203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.891486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.891529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.891729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.891962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.892000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.892196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.892430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.892457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 
00:22:23.552 [2024-05-15 02:39:10.892729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.892995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.893025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.893246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.893431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.893459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.893666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.893854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.893881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.894106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.894326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.894353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.894560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.894740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.894768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.894949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.895161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.895186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.895436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.895697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.895743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 
00:22:23.552 [2024-05-15 02:39:10.896011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.896182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.896209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.896435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.896648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.896689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.896900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.897074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.897100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.897290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.897504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.897532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.897742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.897959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.897985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.898172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.898421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.898448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.898688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.898869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.898897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 
00:22:23.552 [2024-05-15 02:39:10.899121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.899300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.899327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.899566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.899788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.899817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.552 [2024-05-15 02:39:10.900053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.900216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.552 [2024-05-15 02:39:10.900242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.552 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.900433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.900616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.900646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.900863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.901077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.901105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.901318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.901536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.901561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.901805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.901985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.902013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 
00:22:23.553 [2024-05-15 02:39:10.902258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.902475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.902504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.902717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.902927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.902965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.903172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.903378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.903405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.903626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.903873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.903899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.904133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.904343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.904371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.904609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.904835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.904860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.905056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.905249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.905274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 
00:22:23.553 [2024-05-15 02:39:10.905463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.905701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.905729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.905943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.906126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.906154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.906399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.906590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.906618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.906859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.907088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.907118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.907362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.907558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.907583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.907774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.907963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.907993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.908186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.908427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.908454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 
00:22:23.553 [2024-05-15 02:39:10.908692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.908892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.908917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.909101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.909287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.909315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.909499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.909711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.909741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.909928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.910152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.910181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.910420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.910630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.910658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.553 [2024-05-15 02:39:10.910836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.911050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.553 [2024-05-15 02:39:10.911087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.553 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.911277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.911466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.911491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 
00:22:23.554 [2024-05-15 02:39:10.911687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.911883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.911908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.912106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.912294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.912319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.912476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.912692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.912717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.912942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.913130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.913157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.913326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.913482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.913507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.913720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.913897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.913924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.914135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.914342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.914370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 
00:22:23.554 [2024-05-15 02:39:10.914610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.914819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.914843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.915037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.915251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.915283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.915522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.915693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.915718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.915913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.916095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.916123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.916337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.916579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.916607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.916852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.917067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.917096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.917338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.917577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.917605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 
00:22:23.554 [2024-05-15 02:39:10.917788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.917999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.918027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.918211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.918414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.918441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.918645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.918803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.918828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.919044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.919274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.919301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.919543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.919758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.919790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.920002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.920203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.920231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.920441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.920609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.920634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 
00:22:23.554 [2024-05-15 02:39:10.920816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.920998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.921027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.921220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.921385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.921426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.921669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.921858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.921883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.922078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.922296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.922324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.554 qpair failed and we were unable to recover it. 00:22:23.554 [2024-05-15 02:39:10.922535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.554 [2024-05-15 02:39:10.922745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.922774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.923012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.923214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.923240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.923454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.923670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.923695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 
00:22:23.555 [2024-05-15 02:39:10.923910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.924137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.924171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.924419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.924686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.924730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.924963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.925181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.925210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.925577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.925977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.926006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.926244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.926459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.926487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.926724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.926908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.926942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.927134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.927347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.927374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 
00:22:23.555 [2024-05-15 02:39:10.927576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.927792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.927819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.928035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.928208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.928234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.928414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.928660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.928688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.928928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.929146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.929173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.929358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.929572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.929597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.555 qpair failed and we were unable to recover it. 00:22:23.555 [2024-05-15 02:39:10.929772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.555 [2024-05-15 02:39:10.929944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.929970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.556 qpair failed and we were unable to recover it. 00:22:23.556 [2024-05-15 02:39:10.930185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.930369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.930396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.556 qpair failed and we were unable to recover it. 
00:22:23.556 [2024-05-15 02:39:10.930646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.930860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.930888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.556 qpair failed and we were unable to recover it. 00:22:23.556 [2024-05-15 02:39:10.931106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.931313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.931341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.556 qpair failed and we were unable to recover it. 00:22:23.556 [2024-05-15 02:39:10.931558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.931778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.931805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.556 qpair failed and we were unable to recover it. 00:22:23.556 [2024-05-15 02:39:10.932024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.932241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.932269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.556 qpair failed and we were unable to recover it. 00:22:23.556 [2024-05-15 02:39:10.932493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.932715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.932744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.556 qpair failed and we were unable to recover it. 00:22:23.556 [2024-05-15 02:39:10.932963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.933153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.933177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.556 qpair failed and we were unable to recover it. 00:22:23.556 [2024-05-15 02:39:10.933395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.933596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.556 [2024-05-15 02:39:10.933623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.556 qpair failed and we were unable to recover it. 
00:22:23.557 [2024-05-15 02:39:10.933813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.557 [2024-05-15 02:39:10.934033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.557 [2024-05-15 02:39:10.934063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.934257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.934493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.934520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.934815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.935057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.935085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.935323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.935540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.935568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.935801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.935993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.936022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.936246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.936451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.936479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.936724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.936940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.936969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 
00:22:23.827 [2024-05-15 02:39:10.937148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.937433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.937462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.937667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.937875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.937903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.938114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.938338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.938366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.938646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.938857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.938884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.939097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.939283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.939311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.939520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.939759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.939786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.940035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.940281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.940309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 
00:22:23.827 [2024-05-15 02:39:10.940623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.940895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.940920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.941120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.941332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.941360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.941585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.941769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.941794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.941965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.942160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.942185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.942378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.942605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.942631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.942852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.943070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.943100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.943358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.943573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.943602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 
00:22:23.827 [2024-05-15 02:39:10.943790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.944037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.944063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.944245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.944460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.944485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.944675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.944924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.944959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.945191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.945435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.945460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.945653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.945867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.945895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.946087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.946302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.946330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 00:22:23.827 [2024-05-15 02:39:10.946536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.946765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.827 [2024-05-15 02:39:10.946792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.827 qpair failed and we were unable to recover it. 
00:22:23.827 [2024-05-15 02:39:10.947003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.947217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.947244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.947477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.947710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.947737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.947923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.948118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.948146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.948364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.948546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.948575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.948790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.949001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.949027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.949274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.949481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.949527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.949763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.949985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.950014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 
00:22:23.828 [2024-05-15 02:39:10.950200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.950443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.950471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.950692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.950883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.950907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.951131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.951357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.951385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.951590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.951788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.951818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.952031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.952250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.952278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.952474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.952686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.952716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.952903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.953092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.953121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 
00:22:23.828 [2024-05-15 02:39:10.953365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.953557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.953583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.953772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.953989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.954019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.954258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.954476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.954500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.954714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.954908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.954938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.955132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.955355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.955384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.955595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.955850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.955879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.956093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.956284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.956313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 
00:22:23.828 [2024-05-15 02:39:10.956508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.956689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.956717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.956956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.957151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.957179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.957365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.957519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.957560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.957810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.958007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.958033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.958226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.958413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.958441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.958658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.958864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.958892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.959147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.959309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.959334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 
00:22:23.828 [2024-05-15 02:39:10.959495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.959720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.959750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.959938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.960155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.960183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.960365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.960573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.960601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.960811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.961025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.961054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.961263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.961529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.961558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.961764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.961996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.962040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.962273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.962518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.962546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 
00:22:23.828 [2024-05-15 02:39:10.962791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.963056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.963084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.963336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.963512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.963540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.963791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.964012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.964043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.964235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.964564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.964616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.964860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.965073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.965103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.828 qpair failed and we were unable to recover it. 00:22:23.828 [2024-05-15 02:39:10.965343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.828 [2024-05-15 02:39:10.965590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.965615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.965813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.966050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.966078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 
00:22:23.829 [2024-05-15 02:39:10.966341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.966579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.966604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.966821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.967040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.967070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.967290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.967505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.967533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.967777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.968023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.968053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.968412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.968745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.968771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.968940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.969137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.969163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.969413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.969626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.969651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 
00:22:23.829 [2024-05-15 02:39:10.969818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.969987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.970015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.970230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.970431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.970459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.970682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.970870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.970897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.971078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.971245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.971281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.971501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.971759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.971785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.971985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.972154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.972179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.972395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.972611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.972646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 
00:22:23.829 [2024-05-15 02:39:10.972885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.973103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.973129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.973347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.973731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.973784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.974043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.974285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.974351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.974563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.974819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.974868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.975071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.975284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.975317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.975563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.975782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.975810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.976010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.976207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.976247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 
00:22:23.829 [2024-05-15 02:39:10.976466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.976773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.976833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.977046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.977216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.977242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.977469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.977679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.977704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.977948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.978161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.978186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.978380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.978641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.978669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.978881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.979083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.979109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.979304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.979549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.979574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 
00:22:23.829 [2024-05-15 02:39:10.979756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.979919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.979955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.980150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.980367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.980410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.980652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.980863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.980890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.981146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.981554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.981606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.982016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.982209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.982237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.982422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.982640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.982668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.982880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.983102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.983127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 
00:22:23.829 [2024-05-15 02:39:10.983373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.983554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.829 [2024-05-15 02:39:10.983582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.829 qpair failed and we were unable to recover it. 00:22:23.829 [2024-05-15 02:39:10.983801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.984056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.984082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.984287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.984650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.984698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.984941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.985153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.985180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.985391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.985721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.985769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.985994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.986202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.986230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.986495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.986778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.986807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 
00:22:23.830 [2024-05-15 02:39:10.987048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.987254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.987283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.987501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.987810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.987860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.988083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.988305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.988333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.988516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.988871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.988918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.989137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.989405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.989457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.989662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.989847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.989875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.990111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.990341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.990392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 
00:22:23.830 [2024-05-15 02:39:10.990720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.990927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.990959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.991171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.991480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.991534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.991746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.991968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.991998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.992185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.992401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.992429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.992646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.992889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.992917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.993132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.993449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.993505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 00:22:23.830 [2024-05-15 02:39:10.993714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.993888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.830 [2024-05-15 02:39:10.993916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.830 qpair failed and we were unable to recover it. 
00:22:23.830 [2024-05-15 02:39:10.994129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.830 [2024-05-15 02:39:10.994338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.830 [2024-05-15 02:39:10.994362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:23.830 qpair failed and we were unable to recover it.
00:22:23.830 [2024-05-15 02:39:10.994556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.830 [2024-05-15 02:39:10.994796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.830 [2024-05-15 02:39:10.994823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:23.830 qpair failed and we were unable to recover it.
[the same four-line failure sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x224e420 at 10.0.0.2 port 4420, and "qpair failed and we were unable to recover it.") recurs roughly 150 more times between 02:39:10.994 and 02:39:11.070]
00:22:23.835 [2024-05-15 02:39:11.070115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.835 [2024-05-15 02:39:11.070338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.835 [2024-05-15 02:39:11.070403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:23.835 qpair failed and we were unable to recover it.
00:22:23.835 [2024-05-15 02:39:11.070592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.070832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.070860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.071097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.071290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.071315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.071618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.071876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.071904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.072146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.072493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.072547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.072760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.072995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.073020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.073229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.073477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.073507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.073750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.073940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.073968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 
00:22:23.835 [2024-05-15 02:39:11.074208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.074394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.074418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.074601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.074824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.074849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.075098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.075348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.075389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.075560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.075819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.075869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.076090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.076296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.076321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.076577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.076829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.076887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.077105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.077298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.077326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 
00:22:23.835 [2024-05-15 02:39:11.077547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.077721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.077753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.077974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.078135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.078161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.078357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.078635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.078662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.078839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.079025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.079054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.079257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.079465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.079492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.079727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.079940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.079969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.080194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.080445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.080495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 
00:22:23.835 [2024-05-15 02:39:11.080784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.081035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.081063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.081256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.081473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.081501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.081703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.081947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.081975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.082218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.082393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.082422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.082660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.082895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.082923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.083157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.083434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.083484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.083745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.084020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.084049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 
00:22:23.835 [2024-05-15 02:39:11.084263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.084521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.084576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.084787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.085058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.085086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.085296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.085508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.085536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.085725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.085943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.085972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.086207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.086397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.086436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.835 qpair failed and we were unable to recover it. 00:22:23.835 [2024-05-15 02:39:11.086621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.086805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.835 [2024-05-15 02:39:11.086832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.087040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.087281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.087309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 
00:22:23.836 [2024-05-15 02:39:11.087511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.087714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.087740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.088063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.088288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.088316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.088557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.088874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.088923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.089139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.089335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.089386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.089622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.089841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.089869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.090089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.090294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.090318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.090522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.090868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.090924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 
00:22:23.836 [2024-05-15 02:39:11.091117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.091357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.091385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.091602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.091816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.091844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.092022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.092258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.092286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.092505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.092815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.092839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.093105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.093316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.093345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.093565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.093827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.093855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.094041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.094256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.094283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 
00:22:23.836 [2024-05-15 02:39:11.094463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.094673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.094703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.094877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.095098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.095124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.095324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.095540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.095568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.095746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.095950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.095977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.096189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.096491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.096543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.096816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.097049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.097075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.097296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.097476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.097516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 
00:22:23.836 [2024-05-15 02:39:11.097783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.098033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.098062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.098246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.098518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.098567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.098863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.099095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.099124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.099337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.099592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.099642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.099881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.100097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.100123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.100304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.100585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.100634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.100873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.101089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.101117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 
00:22:23.836 [2024-05-15 02:39:11.101343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.101591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.101619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.101859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.102043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.102073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.102289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.102690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.102742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.102961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.103147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.103175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.103464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.103745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.103770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.103992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.104200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.104228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.104436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.104615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.104643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 
00:22:23.836 [2024-05-15 02:39:11.104902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.105222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.105252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.105514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.105821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.105870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.106082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.106249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.106290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.836 qpair failed and we were unable to recover it. 00:22:23.836 [2024-05-15 02:39:11.106508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.106822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.836 [2024-05-15 02:39:11.106880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.107123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.107522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.107582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.107848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.108063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.108095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.108335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.108752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.108802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 
00:22:23.837 [2024-05-15 02:39:11.109013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.109250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.109277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.109452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.109628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.109652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.109877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.110098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.110127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.110371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.110613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.110663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.110882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.111114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.111143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.111363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.111573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.111621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.111891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.112140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.112169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 
00:22:23.837 [2024-05-15 02:39:11.112387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.112566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.112590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.112850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.113045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.113074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.113419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.113722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.113745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.113942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.114155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.114182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.114393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.114822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.114874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.115124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.115330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.115355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.115653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.115864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.115892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 
00:22:23.837 [2024-05-15 02:39:11.116121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.116294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.116333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.116536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.116864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.116919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.117111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.117355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.117382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.117766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.118035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.118063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.118291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.118657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.118721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.118907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.119136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.119163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.119360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.119617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.119641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 
00:22:23.837 [2024-05-15 02:39:11.119854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.120045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.120072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.120264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.120487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.120512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.120701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.120950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.120979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.121190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.121372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.121402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.121656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.121878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.121905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.122132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.122350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.122378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 00:22:23.837 [2024-05-15 02:39:11.122549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.122795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.122860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.837 qpair failed and we were unable to recover it. 
00:22:23.837 [2024-05-15 02:39:11.123091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.123308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.837 [2024-05-15 02:39:11.123336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.123686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.123969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.123999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.124202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.124525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.124575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.124815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.125064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.125090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.125251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.125407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.125449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.125800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.126052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.126081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.126293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.126540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.126567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 
00:22:23.838 [2024-05-15 02:39:11.126790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.127046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.127072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.127271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.127464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.127493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.127710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.127918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.127951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.128170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.128367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.128395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.128613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.128821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.128849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.129069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.129304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.129367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.129725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.129958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.129987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 
00:22:23.838 [2024-05-15 02:39:11.130201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.130446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.130473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.130708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.130895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.130922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.131119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.131379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.131403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.131664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.131899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.131927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.132179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.132562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.132624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.132829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.133075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.133101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.133334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.133563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.133587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 
00:22:23.838 [2024-05-15 02:39:11.133816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.134058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.134091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.134298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.134454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.134479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.134657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.134825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.134867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.135104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.135330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.135358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.135597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.135782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.135806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.136044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.136269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.136296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.136501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.136843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.136892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 
00:22:23.838 [2024-05-15 02:39:11.137147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.137482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.137545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.137855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.138073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.138101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.138319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.138539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.138569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.138779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.138986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.139015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.139255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.139691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.139747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.139983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.140226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.140254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 00:22:23.838 [2024-05-15 02:39:11.140473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.140670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.140700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.838 qpair failed and we were unable to recover it. 
00:22:23.838 [2024-05-15 02:39:11.140924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.838 [2024-05-15 02:39:11.141148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.141173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.141380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.141544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.141568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.141801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.141985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.142014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.142219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.142384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.142410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.142615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.142860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.142888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.143095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.143312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.143337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.143545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.143785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.143824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 
00:22:23.839 [2024-05-15 02:39:11.144036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.144196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.144220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.144483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.144717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.144743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.144988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.145204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.145232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.145534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.145772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.145832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.146060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.146244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.146272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.146478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.146835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.146897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.147143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.147338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.147363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 
00:22:23.839 [2024-05-15 02:39:11.147556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.147737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.147762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.147992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.148161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.148186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.148393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.148602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.148627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.148895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.149128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.149153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.149417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.149593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.149621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.149835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.150023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.150052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.150240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.150477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.150506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 
00:22:23.839 [2024-05-15 02:39:11.150722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.150905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.150945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.151139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.151349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.151374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.151579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.151881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.151939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.152150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.152400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.152425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.152648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.152828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.152856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.153083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.153293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.153321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.153549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.153717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.153744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 
00:22:23.839 [2024-05-15 02:39:11.153928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.154119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.154148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.154345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.154650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.154704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.154957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.155151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.155179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.155429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.155615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.155643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.155882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.156098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.156126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.156350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.156562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.156590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.156798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.156987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.157017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 
00:22:23.839 [2024-05-15 02:39:11.157203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.157371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.839 [2024-05-15 02:39:11.157411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.839 qpair failed and we were unable to recover it. 00:22:23.839 [2024-05-15 02:39:11.157605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.157819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.157844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.158073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.158326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.158386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.158674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.158884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.158912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.159149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.159374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.159399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.159565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.159758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.159784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.160040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.160197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.160239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 
00:22:23.840 [2024-05-15 02:39:11.160501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.160703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.160731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.160956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.161141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.161165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.161385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.161587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.161614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.161826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.162068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.162097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.162287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.162504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.162565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.162852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.163065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.163095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.163322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.163644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.163697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 
00:22:23.840 [2024-05-15 02:39:11.163943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.164157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.164184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.164524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.164811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.164840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.165061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.165324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.165375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.165631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.165842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.165866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.166101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.166289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.166317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.166562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.166812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.166874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.167095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.167319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.167346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 
00:22:23.840 [2024-05-15 02:39:11.167582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.167793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.167820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.167999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.168239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.168296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.168542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.168956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.169001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.169212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.169453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.169480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.169718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.169927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.169959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.170167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.170498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.170554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.170878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.171157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.171185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 
00:22:23.840 [2024-05-15 02:39:11.171392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.171693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.171720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.171955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.172173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.172201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.172414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.172643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.172668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.172885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.173126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.173166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.173385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.173606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.173633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.173924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.174083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.174108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.174295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.174534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.174561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 
00:22:23.840 [2024-05-15 02:39:11.174864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.175119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.175147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.175383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.175573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.175600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.175845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.176093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.176121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.176333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.176638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.176666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.176876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.177092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.177120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.177327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.177649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.177698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.840 qpair failed and we were unable to recover it. 00:22:23.840 [2024-05-15 02:39:11.177940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.840 [2024-05-15 02:39:11.178149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.178177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 
00:22:23.841 [2024-05-15 02:39:11.178386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.178602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.178629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.178804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.178995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.179023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.179301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.179514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.179541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.179781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.180089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.180117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.180326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.180620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.180647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.180879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.181054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.181082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.181301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.181523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.181547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 
00:22:23.841 [2024-05-15 02:39:11.181777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.182022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.182050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.182232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.182445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.182499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.182839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.183073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.183099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.183312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.183469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.183494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.183651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.183871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.183904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.184105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.184291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.184319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.184652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.184890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.184920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 
00:22:23.841 [2024-05-15 02:39:11.185169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.185417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.185468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.185680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.185921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.185955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.186144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.186333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.186362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.186789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.187029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.187058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.187262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.187455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.187479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.187664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.187882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.187906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.188159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.188467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.188494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 
00:22:23.841 [2024-05-15 02:39:11.188780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.189019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.189053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.189258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.189549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.189604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.189824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.190050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.190076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.190298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.190511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.190539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.190866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.191131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.191159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.191369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.191611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.191668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.191921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.192116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.192141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 
00:22:23.841 [2024-05-15 02:39:11.192345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.192558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.192585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.192781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.192995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.193023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.193263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.193625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.193685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.193898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.194139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.194167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.194358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.194709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.194779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.841 [2024-05-15 02:39:11.195094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.195284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.841 [2024-05-15 02:39:11.195311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.841 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.195528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.195991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.196020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 
00:22:23.842 [2024-05-15 02:39:11.196258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.196467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.196495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.196712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.196915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.196945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.197151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.197375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.197402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.197603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.197795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.197821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.198047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.198261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.198289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.198492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.198663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.198690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.198890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.199105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.199133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 
00:22:23.842 [2024-05-15 02:39:11.199359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.199547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.199571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.199817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.199977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.200002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.200218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.200400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.200425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.200653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.200866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.200894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.201121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.201425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.201485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.201692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.201957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.202000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.202198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.202421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.202446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 
00:22:23.842 [2024-05-15 02:39:11.202728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.202996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.203031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.203203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.203446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.203496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.203740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.203949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.203978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.204221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.204381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.204406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.204619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.204857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.204881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.205083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.205302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.205330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.205551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.205741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.205766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 
00:22:23.842 [2024-05-15 02:39:11.205950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.206135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.206162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.206348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.206557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.206586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.206776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.206951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.206976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.207180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.207428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.207456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.207698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.207911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.208008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.208266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.208481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.208506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.208701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.208879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.208904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 
00:22:23.842 [2024-05-15 02:39:11.209106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.209370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.209420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.209663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.209897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.209925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.210116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.210303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.210330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.210546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.210813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.210866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.211085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.211253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.211278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.211489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.211824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.211875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.212090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.212352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.212402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 
00:22:23.842 [2024-05-15 02:39:11.212644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.212860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.212887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.213090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.213280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.213305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.213465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.213652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.213681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.213878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.214087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.214116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.214334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.214526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.214550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.214765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.214959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.214989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 00:22:23.842 [2024-05-15 02:39:11.215187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.215409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.842 [2024-05-15 02:39:11.215463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.842 qpair failed and we were unable to recover it. 
00:22:23.843 [2024-05-15 02:39:11.215845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.216082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.216110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.216347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.216560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.216587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.216803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.217012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.217040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.217248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.217461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.217488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.217726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.217967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.217995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.218169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.218333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.218358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.218588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.218791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.218818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 
00:22:23.843 [2024-05-15 02:39:11.219057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.219295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.219323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.219588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.219786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.219814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.220000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.220240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.220290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.220480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.220716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.220743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.220923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.221131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.221159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.221479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.221818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.221843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.222038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.222290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.222317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 
00:22:23.843 [2024-05-15 02:39:11.222503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.222749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.222776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.223036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.223231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.223256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.223561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.223851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.223876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.224071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.224247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.224275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.224509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.224698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.224722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.224942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.225160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.225190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.225474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.225701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.225728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 
00:22:23.843 [2024-05-15 02:39:11.225972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.226194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.226222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.226465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.226660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.226688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.226908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.227133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.227158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.227374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.227659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.227687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.227907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.228105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.228130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.228363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.228577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.228604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 00:22:23.843 [2024-05-15 02:39:11.228816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.229032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.229061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:23.843 qpair failed and we were unable to recover it. 
00:22:23.843 [2024-05-15 02:39:11.229340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.843 [2024-05-15 02:39:11.229732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.229780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.230008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.230196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.230223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.230467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.230911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.230982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.231197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.231555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.231611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.232024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.232244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.232273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.232442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.232680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.232709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.232913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.233131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.233159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 
00:22:24.113 [2024-05-15 02:39:11.233341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.233528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.233557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.233802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.233998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.234024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.234195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.234517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.234565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.234772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.234981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.235009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.235205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.235447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.235475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.235755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.235963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.235991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.236201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.236412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.236439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 
00:22:24.113 [2024-05-15 02:39:11.236652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.236886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.236914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.237116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.237326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.237356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.237624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.237838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.237862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.238053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.238269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.238296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.238534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.238741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.238774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.113 qpair failed and we were unable to recover it. 00:22:24.113 [2024-05-15 02:39:11.238956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.113 [2024-05-15 02:39:11.239176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.239201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.114 qpair failed and we were unable to recover it. 00:22:24.114 [2024-05-15 02:39:11.239412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.239708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.239733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.114 qpair failed and we were unable to recover it. 
00:22:24.114 [2024-05-15 02:39:11.239919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.240116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.240146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.114 qpair failed and we were unable to recover it. 00:22:24.114 [2024-05-15 02:39:11.240359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.240604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.240629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.114 qpair failed and we were unable to recover it. 00:22:24.114 [2024-05-15 02:39:11.240848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.241065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.241093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.114 qpair failed and we were unable to recover it. 00:22:24.114 [2024-05-15 02:39:11.241408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.241635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.241659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.114 qpair failed and we were unable to recover it. 00:22:24.114 [2024-05-15 02:39:11.241893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.242129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.242155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.114 qpair failed and we were unable to recover it. 00:22:24.114 [2024-05-15 02:39:11.242379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.242623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.242648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.114 qpair failed and we were unable to recover it. 00:22:24.114 [2024-05-15 02:39:11.242864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.243084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.114 [2024-05-15 02:39:11.243113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.114 qpair failed and we were unable to recover it. 
00:22:24.114 [2024-05-15 02:39:11.243525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.114 [2024-05-15 02:39:11.243927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.114 [2024-05-15 02:39:11.243996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:24.114 qpair failed and we were unable to recover it.
00:22:24.114 [... the same pattern, two posix_sock_create connect() failures (errno = 111) followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x224e420 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", repeats for every subsequent connect attempt from 02:39:11.244 through 02:39:11.319 ...]
00:22:24.119 [2024-05-15 02:39:11.319381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.319669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.319718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.119 [2024-05-15 02:39:11.319946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.320165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.320193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.119 [2024-05-15 02:39:11.320570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.321007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.321035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.119 [2024-05-15 02:39:11.321255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.321635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.321687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.119 [2024-05-15 02:39:11.321902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.322104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.322132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.119 [2024-05-15 02:39:11.322346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.322542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.322569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.119 [2024-05-15 02:39:11.322794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.322981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.323010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 
00:22:24.119 [2024-05-15 02:39:11.323188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.323435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.323463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.119 [2024-05-15 02:39:11.323656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.323897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.323922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.119 [2024-05-15 02:39:11.324143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.324494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.324553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.119 [2024-05-15 02:39:11.324984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.325219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.119 [2024-05-15 02:39:11.325247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.119 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.325460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.325772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.325840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.326083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.326268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.326295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.326508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.326748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.326773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 
00:22:24.120 [2024-05-15 02:39:11.327010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.327222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.327252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.327500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.327673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.327698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.327858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.328046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.328071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.328343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.328618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.328645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.328818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.329046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.329071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.329292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.329482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.329507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.329758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.330006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.330032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 
00:22:24.120 [2024-05-15 02:39:11.330242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.330453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.330478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.330671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.330861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.330885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.331046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.331222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.331250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.331466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.331628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.331668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.331876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.332088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.332116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.332414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.332874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.332938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.333158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.333379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.333404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 
00:22:24.120 [2024-05-15 02:39:11.333634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.333863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.333890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.334085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.334308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.334333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.334726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.334998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.335027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.335265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.335481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.335511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.335701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.335990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.336019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.336232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.336424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.336448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.336637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.336807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.336833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 
00:22:24.120 [2024-05-15 02:39:11.337002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.337162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.337188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.337423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.337713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.337777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.338088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.338302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.338329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.338586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.338761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.338786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.339007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.339259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.120 [2024-05-15 02:39:11.339287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.120 qpair failed and we were unable to recover it. 00:22:24.120 [2024-05-15 02:39:11.339470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.339685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.339713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.339951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.340137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.340166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 
00:22:24.121 [2024-05-15 02:39:11.340354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.340702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.340773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.341001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.341855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.341887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.342102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.342285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.342312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.342502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.342691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.342721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.342926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.343157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.343183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.343415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.344063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.344095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.344345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.344575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.344604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 
00:22:24.121 [2024-05-15 02:39:11.344825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.345019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.345045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.345266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.345619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.345678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.345925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.346176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.346205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.346405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.346604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.346632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.346831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.346996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.347021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.347215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.347408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.347436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.347684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.347894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.347922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 
00:22:24.121 [2024-05-15 02:39:11.348114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.348350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.348376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.348598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.348917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.349001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.349223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.349406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.349431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.349651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.349838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.349864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.350065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.350279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.350307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.350498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.350739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.350764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.350988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.351180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.351210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 
00:22:24.121 [2024-05-15 02:39:11.351415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.351587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.351613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.351835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.352061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.352090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.352301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.352524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.352576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.352994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.353214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.353248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.353435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.353606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.353632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.353876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.354079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.354108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 00:22:24.121 [2024-05-15 02:39:11.354301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.354585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.121 [2024-05-15 02:39:11.354641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.121 qpair failed and we were unable to recover it. 
00:22:24.121 [2024-05-15 02:39:11.354856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.355100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.355128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.355307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.355548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.355576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.355782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.355991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.356019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.356194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.356384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.356414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.356645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.356840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.356866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.357076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.357238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.357262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.357498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.357921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.357985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 
00:22:24.122 [2024-05-15 02:39:11.358192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.358456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.358503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.358795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.359052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.359078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.359273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.359492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.359521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.359717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.359937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.359966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.360199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.360406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.360434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.360649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.360859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.360888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.361101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.361318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.361344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 
00:22:24.122 [2024-05-15 02:39:11.361589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.361995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.362024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.362224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.362434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.362462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.362671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.362860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.362888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.363120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.363469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.363533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.363865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.364173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.364202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.364444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.364661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.364689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.364908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.365106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.365134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 
00:22:24.122 [2024-05-15 02:39:11.365341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.365654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.365713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.365951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.366169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.366197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.366464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.366780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.366805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.122 [2024-05-15 02:39:11.367053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.367304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.122 [2024-05-15 02:39:11.367333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.122 qpair failed and we were unable to recover it. 00:22:24.123 [2024-05-15 02:39:11.367546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.367767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.367794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 00:22:24.123 [2024-05-15 02:39:11.367988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.368183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.368209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 00:22:24.123 [2024-05-15 02:39:11.368435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.368644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.368673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 
00:22:24.123 [2024-05-15 02:39:11.368889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.369139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.369167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 00:22:24.123 [2024-05-15 02:39:11.369364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.369582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.369612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 00:22:24.123 [2024-05-15 02:39:11.369872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.370131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.370156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 00:22:24.123 [2024-05-15 02:39:11.370357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.370670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.370723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 00:22:24.123 [2024-05-15 02:39:11.370954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.371161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.371193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 00:22:24.123 [2024-05-15 02:39:11.371377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.371618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.371646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 00:22:24.123 [2024-05-15 02:39:11.371884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.372079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.123 [2024-05-15 02:39:11.372104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.123 qpair failed and we were unable to recover it. 
00:22:24.128 [2024-05-15 02:39:11.444649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.444830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.444858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.445069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.445257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.445286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.445560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.445853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.445881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.446098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.446316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.446346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.446601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.446879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.446923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.447182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.447392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.447420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.447658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.447865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.447893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 
00:22:24.128 [2024-05-15 02:39:11.448128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.448307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.448335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.448511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.448743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.448767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.448987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.449230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.449258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.449696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.449945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.449971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.450147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.450385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.450413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.450652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.450862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.450894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.451088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.451403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.451463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 
00:22:24.128 [2024-05-15 02:39:11.451717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.451961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.451990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.452179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.452524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.452579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.452815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.453069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.453098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.453307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.453548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.128 [2024-05-15 02:39:11.453577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.128 qpair failed and we were unable to recover it. 00:22:24.128 [2024-05-15 02:39:11.453822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.454054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.454079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.454291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.454509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.454538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.454751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.454966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.454995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 
00:22:24.129 [2024-05-15 02:39:11.455170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.455381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.455408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.455612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.455807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.455839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.456063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.456306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.456333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.456544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.456812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.456861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.457060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.457303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.457329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.457492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.457739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.457767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.457999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.458187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.458216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 
00:22:24.129 [2024-05-15 02:39:11.458426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.458825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.458876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.459099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.459406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.459459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.459855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.460130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.460158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.460352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.460562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.460590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.460804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.461039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.461068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.461257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.461583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.461651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.461857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.462096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.462125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 
00:22:24.129 [2024-05-15 02:39:11.462328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.462640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.462694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.462906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.463137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.463166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.463346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.463597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.463623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.463835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.464056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.464085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.464273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.464471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.464495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.464688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.464872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.464899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.465087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.465326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.465355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 
00:22:24.129 [2024-05-15 02:39:11.465655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.465944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.465973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.466163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.466366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.466391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.466610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.466819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.466849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.467058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.467294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.467322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.467702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.467952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.467981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.468177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.468351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.129 [2024-05-15 02:39:11.468377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.129 qpair failed and we were unable to recover it. 00:22:24.129 [2024-05-15 02:39:11.468591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.468832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.468860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 
00:22:24.130 [2024-05-15 02:39:11.469093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.469307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.469346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.469533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.469717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.469745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.469968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.470140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.470165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.470385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.470672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.470700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.470910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.471130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.471160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.471565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.471882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.471908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.472135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.472459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.472510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 
00:22:24.130 [2024-05-15 02:39:11.472732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.472940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.472970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.473158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.473390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.473416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.473728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.474020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.474049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.474264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.474466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.474492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.474706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.474949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.474992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.475235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.475521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.475549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.475728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.475954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.475981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 
00:22:24.130 [2024-05-15 02:39:11.476167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.476378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.476403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.476595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.476892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.476960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.477160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.477382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.477410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.477582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.477755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.477782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.477998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.478211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.478239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.478443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.478661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.478687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.478939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.479128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.479156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 
00:22:24.130 [2024-05-15 02:39:11.479454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.479704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.479733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.479970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.480159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.480186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.480377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.480595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.480619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.480778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.480995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.481027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.481250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.481426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.481455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.481699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.481887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.481912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 00:22:24.130 [2024-05-15 02:39:11.482098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.482320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.482345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.130 qpair failed and we were unable to recover it. 
00:22:24.130 [2024-05-15 02:39:11.482537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.482990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.130 [2024-05-15 02:39:11.483023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.483233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.483568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.483614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.483851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.484023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.484049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.484240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.484679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.484711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.484909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.485113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.485141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.485355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.485577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.485604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.485824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.486051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.486081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 
00:22:24.131 [2024-05-15 02:39:11.486302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.486501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.486526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.486772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.487024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.487050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.487250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.487477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.487505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.487686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.487879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.487906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.488166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.488417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.488442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.488652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.488832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.488860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.489049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.489268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.489293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 
00:22:24.131 [2024-05-15 02:39:11.489484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.489673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.489697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.489911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.490136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.490164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.490374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.490670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.490719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.490967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.491163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.491191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.491415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.491587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.491612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.491986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.492192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.492221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.492432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.492621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.492649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 
00:22:24.131 [2024-05-15 02:39:11.492839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.493028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.493057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.493262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.493433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.493457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.493675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.493889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.493919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.494142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.494409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.494435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.494765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.495003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.495032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.131 qpair failed and we were unable to recover it. 00:22:24.131 [2024-05-15 02:39:11.495234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.131 [2024-05-15 02:39:11.495538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.495597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.495840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.496059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.496090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 
00:22:24.132 [2024-05-15 02:39:11.496282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.496467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.496495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.496776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.497015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.497044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.497260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.497482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.497511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.497730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.497985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.498011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.498199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.498443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.498467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.498630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.498811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.498835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.499036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.499256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.499286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 
00:22:24.132 [2024-05-15 02:39:11.499526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.499716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.499740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.499926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.500172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.500200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.500499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.500860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.500911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.501120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.501367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.501416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.501653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.501860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.501888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.502136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.502395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.502423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.502637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.502848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.502875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 
00:22:24.132 [2024-05-15 02:39:11.503071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.503295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.503323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.503532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.503738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.503765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.503967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.504180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.504209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.504591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.504857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.504884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.505080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.505300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.505328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.505541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.505860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.505935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.506150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.506459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.506517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 
00:22:24.132 [2024-05-15 02:39:11.506737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.506909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.506966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.507155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.507396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.507425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.507637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.507857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.507886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.508145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.508330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.508355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.508679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.508906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.508952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.509148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.509409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.509459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 00:22:24.132 [2024-05-15 02:39:11.509669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.509853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.509883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.132 qpair failed and we were unable to recover it. 
00:22:24.132 [2024-05-15 02:39:11.510112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.132 [2024-05-15 02:39:11.510272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.510306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.510488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.510816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.510870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.511109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.511279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.511304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.511521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.511761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.511791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.511975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.512172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.512197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.512411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.512620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.512651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.512896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.513122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.513147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 
00:22:24.133 [2024-05-15 02:39:11.513329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.513539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.513569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.513811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.514014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.514043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.514256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.514513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.514562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.514804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.515001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.515030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.515207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.515394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.515422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.515640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.515857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.515881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 00:22:24.133 [2024-05-15 02:39:11.516123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.516339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.133 [2024-05-15 02:39:11.516370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.133 qpair failed and we were unable to recover it. 
00:22:24.133 [2024-05-15 02:39:11.516567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.405 [2024-05-15 02:39:11.516859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.405 [2024-05-15 02:39:11.516914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.405 qpair failed and we were unable to recover it. 00:22:24.405 [2024-05-15 02:39:11.517119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.405 [2024-05-15 02:39:11.517326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.405 [2024-05-15 02:39:11.517356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.517563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.517854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.517911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.518143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.518319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.518345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.518517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.518711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.518737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.518936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.519151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.519178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.519408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.519656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.519718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 
00:22:24.406 [2024-05-15 02:39:11.519966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.520179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.520207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.520391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.520560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.520585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.520777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.521022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.521048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.521265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.521490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.521519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.521901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.522119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.522146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.522354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.522519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.522544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.522756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.522969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.522998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 
00:22:24.406 [2024-05-15 02:39:11.523213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.523370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.523395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.523618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.523830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.523857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.524077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.524243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.524268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.524473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.524718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.524765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.525015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.525237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.525265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.525479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.525715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.525742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.525961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.526148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.526173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 
00:22:24.406 [2024-05-15 02:39:11.526387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.526583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.526609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.526840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.527090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.527119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.527307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.527471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.527513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.527752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.527967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.527997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.528189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.528350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.528393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.528608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.528823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.528848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.529093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.529442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.529494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 
00:22:24.406 [2024-05-15 02:39:11.529731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.529989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.530019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.530210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.530440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.530468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.530681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.530908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.530960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.406 qpair failed and we were unable to recover it. 00:22:24.406 [2024-05-15 02:39:11.531179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.406 [2024-05-15 02:39:11.531403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.531428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.531612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.531860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.531887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.532081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.532360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.532410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.532651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.532904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.532941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 
00:22:24.407 [2024-05-15 02:39:11.533180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.533407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.533432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.533649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.533886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.533914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.534158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.534433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.534484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.534723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.534953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.534979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.535233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.535501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.535528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.535774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.536020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.536049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.536228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.536422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.536449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 
00:22:24.407 [2024-05-15 02:39:11.536673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.536883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.536910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.537135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.537394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.537421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.537607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.537977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.538005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.538256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.538492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.538519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.538756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.538957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.538986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.539214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.539431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.539459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.539672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.539914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.539947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 
00:22:24.407 [2024-05-15 02:39:11.540166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.540475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.540534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.540762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.540968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.540997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.541234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.541453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.541478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.541667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.541845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.541872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.542059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.542254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.542279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.542470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.542636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.542661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.542885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.543139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.543164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 
00:22:24.407 [2024-05-15 02:39:11.543341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.543526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.543551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.543743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.543947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.543976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.544191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.544384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.544409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.544701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.544947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.544974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.545171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.545434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.407 [2024-05-15 02:39:11.545484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.407 qpair failed and we were unable to recover it. 00:22:24.407 [2024-05-15 02:39:11.545717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.545901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.545928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.546157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.546468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.546520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 
00:22:24.408 [2024-05-15 02:39:11.546757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.546945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.546973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.547154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.547320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.547361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.547604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.547796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.547821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.548032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.548249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.548277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.548670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.548947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.548976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.549158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.549330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.549355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.549572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.549879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.549928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 
00:22:24.408 [2024-05-15 02:39:11.550161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.550482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.550536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.550825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.551066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.551094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.551286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.551533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.551560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.551804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.552049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.552077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.552290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.552555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.552605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.552867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.553112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.553141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.553353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.553593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.553643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 
00:22:24.408 [2024-05-15 02:39:11.553858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.554066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.554094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.554266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.554479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.554507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.554718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.554970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.555000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.555206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.555408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.555453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.555708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.555885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.555915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.556102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.556310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.556363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.556667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.556874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.556901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 
00:22:24.408 [2024-05-15 02:39:11.557120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.557403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.557452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.557700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.557946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.557972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.558192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.558402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.558429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.558705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.558947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.558975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.559195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.559436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.559485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.559728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.559921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.559961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 00:22:24.408 [2024-05-15 02:39:11.560173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.560362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.408 [2024-05-15 02:39:11.560392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.408 qpair failed and we were unable to recover it. 
00:22:24.408 [2024-05-15 02:39:11.560635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.560797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.560822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.561041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.561267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.561294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.561536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.561832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.561859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.562079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.562266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.562294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.562471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.562685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.562710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.562897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.563140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.563166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.563404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.563619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.563668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 
00:22:24.409 [2024-05-15 02:39:11.563874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.564098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.564124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.564288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.564604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.564671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.564888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.565059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.565085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.565281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.565555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.565606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.565817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.566048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.566077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.566314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.566525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.566553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.566790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.566960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.566986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 
00:22:24.409 [2024-05-15 02:39:11.567178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.567365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.567389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.567610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.567821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.567848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.568027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.568220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.568244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.568427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.568636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.568664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.568874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.569113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.569141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.569366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.569551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.569579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.569777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.569990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.570019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 
00:22:24.409 [2024-05-15 02:39:11.570200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.570441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.570469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.570686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.570864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.570891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.571115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.571330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.571357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.571534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.571745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.571773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.571989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.572207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.572234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.572448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.572692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.572716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.572896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.573091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.573119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 
00:22:24.409 [2024-05-15 02:39:11.573343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.573546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.573572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.409 qpair failed and we were unable to recover it. 00:22:24.409 [2024-05-15 02:39:11.573792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.409 [2024-05-15 02:39:11.573988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.574014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.574202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.574365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.574390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.574636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.574844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.574872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.575110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.575386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.575413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.575635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.575848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.575875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.576066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.576256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.576297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 
00:22:24.410 [2024-05-15 02:39:11.576476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.576649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.576679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.576900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.577101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.577127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.577346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.577611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.577662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.577899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.578144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.578172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.578386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.578772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.578832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.579069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.579232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.579257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.579458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.579647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.579676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 
00:22:24.410 [2024-05-15 02:39:11.579877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.580092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.580121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.580359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.580579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.580606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.580788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.581093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.581122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.581341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.581648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.581712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.581949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.582151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.582178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.582365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.582554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.582584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.582792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.583004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.583033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 
00:22:24.410 [2024-05-15 02:39:11.583252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.583432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.583461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.583676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.583887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.583915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.584172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.584355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.584380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.584723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.584962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.584990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.410 qpair failed and we were unable to recover it. 00:22:24.410 [2024-05-15 02:39:11.585225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-05-15 02:39:11.585409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.585433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.585686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.585870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.585897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.586112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.586300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.586325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 
00:22:24.411 [2024-05-15 02:39:11.586547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.586760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.586787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.587001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.587169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.587194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.587381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.587612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.587639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.587849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.588089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.588117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.588540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.588956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.589000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.589214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.589457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.589506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.589753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.589964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.589992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 
00:22:24.411 [2024-05-15 02:39:11.590184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.590484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.590537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.590820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.590980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.591007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.591174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.591390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.591418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.591657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.591860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.591887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.592098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.592284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.592311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.592501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.592754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.592814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.593041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.593230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.593258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 
00:22:24.411 [2024-05-15 02:39:11.593517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.593958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.593987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.594168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.594420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.594445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.594691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.594951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.594980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.595197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.595386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.595414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.595661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.595878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.595902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.596123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.596307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.596334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.596661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.596890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.596918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 
00:22:24.411 [2024-05-15 02:39:11.597115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.597274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.597298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.597516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.597755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.597804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.598029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.598232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.598261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.598480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.598663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.598729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.598952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.599201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.599229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.599475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.599648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.599675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.411 qpair failed and we were unable to recover it. 00:22:24.411 [2024-05-15 02:39:11.599890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.411 [2024-05-15 02:39:11.600083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.600113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 
00:22:24.412 [2024-05-15 02:39:11.600331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.600525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.600562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.600783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.600971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.600998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.601211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.601402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.601426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.601666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.601870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.601897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.602118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.602355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.602380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.602578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.602918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.602972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.603203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.603466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.603515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 
00:22:24.412 [2024-05-15 02:39:11.603726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.603966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.603995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.604187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.604501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.604549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.604775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.604969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.604995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.605199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.605471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.605526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.605716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.605956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.605985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.606193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.606473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.606508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.606719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.606908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.606960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 
00:22:24.412 [2024-05-15 02:39:11.607175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.607393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.607421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.607607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.607826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.607851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.608082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.608330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.608386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.608606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.608801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.608826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.609042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.609258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.609286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.609526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.609889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.609952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.610196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.610402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.610429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 
00:22:24.412 [2024-05-15 02:39:11.610671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.610836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.610862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.611064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.611250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.611274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.611460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.611698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.611759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.611973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.612222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.612251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.612439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.612656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.612686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.412 qpair failed and we were unable to recover it. 00:22:24.412 [2024-05-15 02:39:11.612926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.412 [2024-05-15 02:39:11.613142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.613170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.613408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.613642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.613679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 
00:22:24.413 [2024-05-15 02:39:11.613902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.614142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.614172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.614393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.614559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.614584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.614775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.614957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.614986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.615203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.615462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.615487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.615665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.615834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.615859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.616046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.616273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.616303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.616521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.616732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.616773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 
00:22:24.413 [2024-05-15 02:39:11.617046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.617260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.617307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.617529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.617773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.617801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.618027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.618218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.618252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.618464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.618673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.618717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.618902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.619122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.619152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.619369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.619630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.619676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.619889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.620083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.620113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 
00:22:24.413 [2024-05-15 02:39:11.620331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.620545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.620573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.620818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.621031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.621060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.621274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.621518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.621547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.621736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.621897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.621922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.622139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.622340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.622380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.622603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.622834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.622862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.623077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.623295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.623323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 
00:22:24.413 [2024-05-15 02:39:11.623541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.623707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.623734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.623934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.624144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.624169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.624421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.624620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.624645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.624889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.625118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.625144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.625328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.625575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.625603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.625846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.626101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.626130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 00:22:24.413 [2024-05-15 02:39:11.626315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.626561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.413 [2024-05-15 02:39:11.626607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.413 qpair failed and we were unable to recover it. 
00:22:24.414 [2024-05-15 02:39:11.626961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.627178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.627207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.627426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.627771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.627832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.628045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.628264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.628291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.628469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.628719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.628747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.628972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.629194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.629222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.629410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.629601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.629626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.629840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.630064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.630091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 
00:22:24.414 [2024-05-15 02:39:11.630277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.630533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.630582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.630845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.631048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.631079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.631264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.631460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.631485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.631708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.631901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.631935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.632136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.632358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.632393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.632603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.632790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.632821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.633066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.633257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.633284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 
00:22:24.414 [2024-05-15 02:39:11.633493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.633677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.633706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.633919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.634179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.634204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.634428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.634625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.634652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.634863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.635090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.635120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.635302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.635483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.635511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.635688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.635924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.635959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.636170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.636395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.636419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 
00:22:24.414 [2024-05-15 02:39:11.636622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.636873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.636906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.637102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.637318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.637348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.637559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.637737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.637765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.637990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.638210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.638247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.638442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.638670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.414 [2024-05-15 02:39:11.638702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.414 qpair failed and we were unable to recover it. 00:22:24.414 [2024-05-15 02:39:11.638945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.639162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.639190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.639394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.639628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.639676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 
00:22:24.415 [2024-05-15 02:39:11.639886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.640080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.640109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.640297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.640541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.640569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.640810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.641028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.641056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.641271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.641515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.641543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.641766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.642027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.642054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.642216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.642429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.642458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.642695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.642911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.642957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 
00:22:24.415 [2024-05-15 02:39:11.643137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.643319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.643347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.643601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.643792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.643817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.644020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.644213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.644252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.644493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.644706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.644735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.644953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.645170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.645198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.645392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.645662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.645706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.645942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.646169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.646199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 
00:22:24.415 [2024-05-15 02:39:11.646441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.646652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.646677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.646858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.647075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.647101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.647268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.647428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.647453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.647673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.647967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.647996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.648213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.648414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.648438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.648654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.648863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.648893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.649108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.649357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.649405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 
00:22:24.415 [2024-05-15 02:39:11.649627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.649840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.649868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.650110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.650330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.650357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.650565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.650830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.650863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.651077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.651283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.651329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.651547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.651740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.651765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.415 qpair failed and we were unable to recover it. 00:22:24.415 [2024-05-15 02:39:11.651954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.415 [2024-05-15 02:39:11.652124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.652152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.652393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.652582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.652607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 
00:22:24.416 [2024-05-15 02:39:11.652833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.653086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.653115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.653325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.653521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.653546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.653700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.653905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.653950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.654145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.654378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.654404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.654624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.654832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.654860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.655067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.655249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.655295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.655487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.655714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.655740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 
00:22:24.416 [2024-05-15 02:39:11.655956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.656170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.656197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.656492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.656725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.656753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.656945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.657142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.657167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.657337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.657503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.657545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.657732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.657916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.657965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.658163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.658378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.658405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.658580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.658826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.658871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 
00:22:24.416 [2024-05-15 02:39:11.659128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.659312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.659340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.659553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.659762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.659789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.660012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.660198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.660230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.660450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.660676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.660701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.660965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.661179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.661206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.661416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.661694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.661749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.661973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.662207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.662234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 
00:22:24.416 [2024-05-15 02:39:11.662437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.662634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.662659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.662875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.663093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.663118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.663337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.663597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.663621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.663837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.664013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.664042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.664259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.664506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.664552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.664800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.665016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.665045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 00:22:24.416 [2024-05-15 02:39:11.665221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.665420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.416 [2024-05-15 02:39:11.665466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.416 qpair failed and we were unable to recover it. 
00:22:24.416 [2024-05-15 02:39:11.665726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.665962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.665988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.666146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.666363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.666390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.666576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.666783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.666811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.667048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.667238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.667262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.667497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.667687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.667728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.667964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.668156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.668183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.668389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.668649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.668692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 
00:22:24.417 [2024-05-15 02:39:11.668905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.669110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.669135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.669384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.669604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.669629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.669818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.670042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.670070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.670254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.670473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.670497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.670708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.670924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.670957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.671169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.671359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.671386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.671626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.671875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.671903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 
00:22:24.417 [2024-05-15 02:39:11.672104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.672288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.672318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.672533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.672745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.672772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.672955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.673167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.673192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.673380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.673573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.673597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.673812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.674058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.674087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.674297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.674472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.674499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.674717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.674919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.674952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 
00:22:24.417 [2024-05-15 02:39:11.675173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.675362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.675389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.675624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.675856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.675883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.676108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.676270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.676296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.676478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.676685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.676709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.676870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.677087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.677112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.677302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.677657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.677716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.677957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.678197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.678225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 
00:22:24.417 [2024-05-15 02:39:11.678451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.678661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.678689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.678939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.679184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.417 [2024-05-15 02:39:11.679212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.417 qpair failed and we were unable to recover it. 00:22:24.417 [2024-05-15 02:39:11.679412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.679625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.679652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.679890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.680069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.680099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.680340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.680644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.680691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.680911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.681101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.681127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.681365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.681555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.681579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 
00:22:24.418 [2024-05-15 02:39:11.681756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.681998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.682024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.682196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.682392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.682418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.682636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.682870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.682904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.683133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.683317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.683345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.683571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.683807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.683861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.684092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.684275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.684300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.684463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.684674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.684701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 
00:22:24.418 [2024-05-15 02:39:11.684919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.685126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.685151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.685375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.685585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.685612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.685876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.686115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.686143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.686356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.686548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.686575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.686819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.687084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.687109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.687279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.687491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.687518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.687740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.687951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.687976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 
00:22:24.418 [2024-05-15 02:39:11.688190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.688423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.688451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.688694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.688906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.688941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.689147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.689352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.689379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.689623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.689859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.689884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.690092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.690277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.690305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.690481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.690666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.690694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.690868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.691049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.691078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 
00:22:24.418 [2024-05-15 02:39:11.691394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.691720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.691747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.691995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.692192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.692217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.692444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.692680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.692725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.692941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.693133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.418 [2024-05-15 02:39:11.693161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.418 qpair failed and we were unable to recover it. 00:22:24.418 [2024-05-15 02:39:11.693376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.693572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.693596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.693760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.693943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.693971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.694152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.694353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.694380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 
00:22:24.419 [2024-05-15 02:39:11.694586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.694971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.694999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.695193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.695403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.695431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.695659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.695828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.695853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.696069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.696281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.696310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.696524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.696755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.696779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.696964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.697176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.697203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.697397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.697573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.697602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 
00:22:24.419 [2024-05-15 02:39:11.697847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.698096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.698124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.698320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.698554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.698581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.698783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.699048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.699080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.699330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.699560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.699608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.699840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.700078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.700107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.700317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.700644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.700695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.700912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.701169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.701197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 
00:22:24.419 [2024-05-15 02:39:11.701414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.701581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.701606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.701789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.701973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.701999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.702227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.702477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.702505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.702705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.702947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.702976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.703190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.703409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.703433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.703654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.703855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.703882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.704085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.704295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.704319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 
00:22:24.419 [2024-05-15 02:39:11.704548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.704745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.704770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.419 [2024-05-15 02:39:11.704959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.705202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.419 [2024-05-15 02:39:11.705230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.419 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.705444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.705653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.705680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.705893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.706108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.706137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.706360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.706534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.706562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.706778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.706995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.707024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.707241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.707451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.707502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 
00:22:24.420 [2024-05-15 02:39:11.707709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.707922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.707953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.708146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.708312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.708336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.708510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.708884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.708947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.709194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.709379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.709404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.709646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.709870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.709897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.710121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.710363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.710390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.710625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.710865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.710892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 
00:22:24.420 [2024-05-15 02:39:11.711103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.711339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.711364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.711583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.711787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.711811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.712051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.712257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.712284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.712501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.712668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.712693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.712879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.713118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.713143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.713329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.713579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.713606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.713781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.713998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.714025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 
00:22:24.420 [2024-05-15 02:39:11.714267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.714481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.714511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.714731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.714955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.714985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.715200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.715428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.715452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.715689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.715873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.715900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.716125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.716335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.716380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.716570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.716749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.716776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.716964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.717139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.717166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 
00:22:24.420 [2024-05-15 02:39:11.717454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.717642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.717668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.717837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.718080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.718108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.718300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.718487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.718511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.420 qpair failed and we were unable to recover it. 00:22:24.420 [2024-05-15 02:39:11.718706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.718891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.420 [2024-05-15 02:39:11.718915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.719140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.719333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.719360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.719572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.719781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.719808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.720016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.720231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.720258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 
00:22:24.421 [2024-05-15 02:39:11.720468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.720652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.720679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.720899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.721077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.721102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.721275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.721439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.721463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.721655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.721836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.721864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.722040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.722219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.722248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.722492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.722876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.722926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.723188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.723374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.723402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 
00:22:24.421 [2024-05-15 02:39:11.723648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.723865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.723892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.724118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.724358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.724404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.724730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.724922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.724952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.725146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.725333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.725357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.725577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.725811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.725838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.726031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.726247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.726298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.726586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.726796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.726823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 
00:22:24.421 [2024-05-15 02:39:11.727056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.727219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.727244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.727485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.727669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.727696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.727907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.728142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.728168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.728369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.728697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.728757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.728994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.729210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.729256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.729466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.729672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.729701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.729940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.730166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.730192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 
00:22:24.421 [2024-05-15 02:39:11.730427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.730646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.730673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.730877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.731057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.731091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.731300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.731578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.731605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.731822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.732033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.732059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.732251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.732454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.732481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.421 qpair failed and we were unable to recover it. 00:22:24.421 [2024-05-15 02:39:11.732683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.732895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.421 [2024-05-15 02:39:11.732922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.733116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.733479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.733533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 
00:22:24.422 [2024-05-15 02:39:11.733790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.734006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.734035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.734271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.734490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.734522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.734722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.734892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.734919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.735140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.735365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.735392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.735679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.735870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.735899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.736134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.736378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.736402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.736669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.736891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.736918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 
00:22:24.422 [2024-05-15 02:39:11.737173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.737420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.737444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.737683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.737919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.737950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.738227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.738489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.738533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.738717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.738910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.738965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.739210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.739422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.739449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.739659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.739870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.739899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.740090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.740320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.740366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 
00:22:24.422 [2024-05-15 02:39:11.740597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.740925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.740996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.741191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.741435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.741480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.741720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.741941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.741970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.742182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.742425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.742470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.742682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.742842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.742867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.743086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.743288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.743317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.743565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.743839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.743884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 
00:22:24.422 [2024-05-15 02:39:11.744104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.744345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.744390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.744644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.744858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.744885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.745091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.745308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.745335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.745541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.745749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.745794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.746014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.746223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.746251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.746486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.746696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.746724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 00:22:24.422 [2024-05-15 02:39:11.746912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.747160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.422 [2024-05-15 02:39:11.747188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.422 qpair failed and we were unable to recover it. 
00:22:24.422 [2024-05-15 02:39:11.747441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.747683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.747708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.747928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.748149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.748178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.748380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.748624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.748652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.748867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.749082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.749110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.749285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.749519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.749564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.749893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.750132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.750158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.750350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.750519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.750544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 
00:22:24.423 [2024-05-15 02:39:11.750758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.750968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.750996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.751208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.751564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.751612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.751917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.752167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.752194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.752386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.752629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.752674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.752893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.753081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.753109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.753291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.753501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.753526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.753680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.753888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.753915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 
00:22:24.423 [2024-05-15 02:39:11.754164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.754449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.754513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.754733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.754964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.755006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.755251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.755472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.755500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.755790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.756014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.756039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.756234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.756390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.756414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.756620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.756806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.756831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.757038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.757296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.757344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 
00:22:24.423 [2024-05-15 02:39:11.757581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.757765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.757792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.757984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.758175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.758201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.758397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.758613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.758641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.758858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.759068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.759096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.423 qpair failed and we were unable to recover it. 00:22:24.423 [2024-05-15 02:39:11.759337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.423 [2024-05-15 02:39:11.759504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.759547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.759763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.759943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.759969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.760126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.760287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.760317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 
00:22:24.424 [2024-05-15 02:39:11.760502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.760682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.760707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.760891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.761089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.761118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.761336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.761522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.761547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.761741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.761923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.761956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.762161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.762374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.762420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.762655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.762868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.762895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.763146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.763390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.763417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 
00:22:24.424 [2024-05-15 02:39:11.763631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.763866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.763893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.764088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.764346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.764371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.764562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.764794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.764853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.765067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.765274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.765299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.765522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.765864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.765920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.766150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.766367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.766394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.766641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.766903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.766927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 
00:22:24.424 [2024-05-15 02:39:11.767160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.767408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.767460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.767699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.767943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.767969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.768129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.768326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.768354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.768576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.768788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.768815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.769031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.769360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.769414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.769631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.769865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.769893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.770139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.770320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.770348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 
00:22:24.424 [2024-05-15 02:39:11.770552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.770781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.770828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.771065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.771299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.771327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.771548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.771956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.772001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.772190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.772535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.772584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.772924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.773167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.773195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.773433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.773640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.424 [2024-05-15 02:39:11.773686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.424 qpair failed and we were unable to recover it. 00:22:24.424 [2024-05-15 02:39:11.773920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.774131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.774156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 
00:22:24.425 [2024-05-15 02:39:11.774370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.774669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.774737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.774988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.775162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.775187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.775402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.775577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.775604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.775815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.776031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.776060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.776260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.776524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.776575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.776865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.777104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.777133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.777370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.777591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.777636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 
00:22:24.425 [2024-05-15 02:39:11.777876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.778068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.778096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.778308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.778551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.778596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.778802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.778986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.779015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.779205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.779453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.779501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.779745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.779958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.779987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.780198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.780473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.780525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.780749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.780964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.780992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 
00:22:24.425 [2024-05-15 02:39:11.781211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.781372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.781396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.781614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.781817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.781845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.782065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.782255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.782284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.782606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.782878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.782902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.783080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.783276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.783301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.783503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.783736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.783764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.783950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.784193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.784218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 
00:22:24.425 [2024-05-15 02:39:11.784416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.784639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.784664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.784849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.785057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.785090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.785263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.785615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.785678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.785895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.786137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.786166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.786552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.786855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.786885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.787086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.787281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.787306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 00:22:24.425 [2024-05-15 02:39:11.787522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.787730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.787754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.425 qpair failed and we were unable to recover it. 
00:22:24.425 [2024-05-15 02:39:11.787912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.788130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.425 [2024-05-15 02:39:11.788158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.788496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.788756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.788780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.789034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.789199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.789224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.789403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.789736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.789797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.790024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.790235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.790263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.790533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.790749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.790775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.790997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.791157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.791182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 
00:22:24.426 [2024-05-15 02:39:11.791347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.791506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.791531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.791749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.791956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.791985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.792377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.792620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.792646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.792841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.793064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.793092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.793304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.793522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.793547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.793729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.793922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.793955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.794168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.794383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.794408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 
00:22:24.426 [2024-05-15 02:39:11.794596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.794808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.794836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.795024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.795260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.795305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.795514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.795737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.795764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.796007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.796270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.796320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.796561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.796781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.796808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.797054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.797374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.797425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.797661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.797891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.797918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 
00:22:24.426 [2024-05-15 02:39:11.798156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.798419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.798470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.798719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.798934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.798962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.799150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.799396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.799421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.799632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.800003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.800031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.800250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.800469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.800493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.800669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.800883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.800910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.801105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.801300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.801326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 
00:22:24.426 [2024-05-15 02:39:11.801564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.801832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.801878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.802098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.802286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.426 [2024-05-15 02:39:11.802313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.426 qpair failed and we were unable to recover it. 00:22:24.426 [2024-05-15 02:39:11.802500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.802667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.802691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.427 [2024-05-15 02:39:11.802899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.803125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.803155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.427 [2024-05-15 02:39:11.803347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.803558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.803585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.427 [2024-05-15 02:39:11.803925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.804149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.804177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.427 [2024-05-15 02:39:11.804391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.804610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.804638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 
00:22:24.427 [2024-05-15 02:39:11.804854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.805040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.805069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.427 [2024-05-15 02:39:11.805286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.805620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.805677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.427 [2024-05-15 02:39:11.805871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.806084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.806112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.427 [2024-05-15 02:39:11.806356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.806537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.806564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.427 [2024-05-15 02:39:11.806780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.806997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.807027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.427 [2024-05-15 02:39:11.807249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.807444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.427 [2024-05-15 02:39:11.807472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.427 qpair failed and we were unable to recover it. 00:22:24.699 [2024-05-15 02:39:11.807716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.699 [2024-05-15 02:39:11.807895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.699 [2024-05-15 02:39:11.807924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.699 qpair failed and we were unable to recover it. 
00:22:24.699 [2024-05-15 02:39:11.808171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.699 [2024-05-15 02:39:11.808461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.699 [2024-05-15 02:39:11.808515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.699 qpair failed and we were unable to recover it. 00:22:24.699 [2024-05-15 02:39:11.808756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.699 [2024-05-15 02:39:11.808970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.699 [2024-05-15 02:39:11.808999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.699 qpair failed and we were unable to recover it. 00:22:24.699 [2024-05-15 02:39:11.809181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.809414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.809467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.809897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.810147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.810182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.810383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.810554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.810579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.810777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.811015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.811044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.811229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.811474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.811498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 
00:22:24.700 [2024-05-15 02:39:11.811668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.811829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.811854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.812069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.812290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.812318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.812529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.812743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.812768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.813065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.813302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.813329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.813511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.813698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.813725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.813913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.814171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.814221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.814429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.814637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.814668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 
00:22:24.700 [2024-05-15 02:39:11.814916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.815210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.815257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.815466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.815652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.815679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.815894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.816107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.816136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.816329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.816546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.816570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.816780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.817000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.817028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.817215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.817434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.817458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.817641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.817884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.817911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 
00:22:24.700 [2024-05-15 02:39:11.818132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.818345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.818372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.818549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.818764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.818792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.819043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.819254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.819282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.819504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.819723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.819751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.819952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.820339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.820397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.820649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.820833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.820857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.821074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.821299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.821326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 
00:22:24.700 [2024-05-15 02:39:11.821563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.821956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.821985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.822198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.822445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.822473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.822661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.822833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.822860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.823037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.823227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.823256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.700 qpair failed and we were unable to recover it. 00:22:24.700 [2024-05-15 02:39:11.823463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.823677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.700 [2024-05-15 02:39:11.823705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.823922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.824142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.824166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.824419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.824679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.824707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 
00:22:24.701 [2024-05-15 02:39:11.824916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.825137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.825165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.825406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.825624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.825651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.825833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.826046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.826072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.826241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.826532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.826581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.827011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.827394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.827448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.827677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.827916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.827950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.828136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.828397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.828472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 
00:22:24.701 [2024-05-15 02:39:11.828694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.828906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.828939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.829154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.829350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.829375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.829591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.829788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.829815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.830003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.830217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.830272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.830493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.830709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.830737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.830953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.831160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.831188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.831434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.831720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.831775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 
00:22:24.701 [2024-05-15 02:39:11.831970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.832181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.832208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.832397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.832640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.832685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.832895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.833111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.833141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.833385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.833572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.833617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.833827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.834037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.834066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.834280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.834535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.834586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.834899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.835175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.835200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 
00:22:24.701 [2024-05-15 02:39:11.835411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.835622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.835650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.835891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.836092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.836117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.836297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.836544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.836569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.836823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.837030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.837058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.837268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.837476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.837503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.837751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.837942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.701 [2024-05-15 02:39:11.837970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.701 qpair failed and we were unable to recover it. 00:22:24.701 [2024-05-15 02:39:11.838153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.838369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.838396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 
00:22:24.702 [2024-05-15 02:39:11.838774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.839017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.839046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.839241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.839538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.839614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.839856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.840063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.840091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.840296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.840564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.840616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.840854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.841070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.841099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.841283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.841486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.841514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.841754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.841940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.841968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 
00:22:24.702 [2024-05-15 02:39:11.842187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.842423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.842449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.842830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.843067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.843097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.843324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.843522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.843547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.843801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.844030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.844057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.844277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.844459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.844487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.844703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.844879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.844908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.845120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.845285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.845311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 
00:22:24.702 [2024-05-15 02:39:11.845475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.845691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.845721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.845897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.846112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.846137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.846420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.846723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.846769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.846979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.847224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.847251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.847463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.847657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.847681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.847838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.848036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.848062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.848223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.848457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.848488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 
00:22:24.702 [2024-05-15 02:39:11.848696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.848894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.848921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.849142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.849380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.849426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.849665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.849898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.849923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.850160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.850346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.850373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.850590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.850755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.850780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.850956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.851199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.851227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.851443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.851688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.851713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 
00:22:24.702 [2024-05-15 02:39:11.851944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.852157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.702 [2024-05-15 02:39:11.852182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.702 qpair failed and we were unable to recover it. 00:22:24.702 [2024-05-15 02:39:11.852370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.852581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.852609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.852825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.852990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.853035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.853249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.853444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.853498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.853708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.853920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.853955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.854148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.854334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.854362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.854604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.854838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.854863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 
00:22:24.703 [2024-05-15 02:39:11.855114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.855331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.855359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.855724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.855972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.856002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.856222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.856435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.856462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.856638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.856838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.856866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.857070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.857281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.857305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.857524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.857750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.857797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.858011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.858206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.858238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 
00:22:24.703 [2024-05-15 02:39:11.858452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.858700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.858726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.858947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.859131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.859158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.859443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.859603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.859629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.859855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.860100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.860126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.860351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.860588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.860615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.860798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.861002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.861031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.861251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.861556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.861619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 
00:22:24.703 [2024-05-15 02:39:11.861830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.862040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.862069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.862290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.862483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.862509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.862671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.862861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.862889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.863157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.863534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.863599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.863836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.864053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.864082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.864297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.864537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.864589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.864810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.864988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.865016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 
00:22:24.703 [2024-05-15 02:39:11.865229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.865424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.865451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.865645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.865857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.865886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.866092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.866256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.703 [2024-05-15 02:39:11.866282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.703 qpair failed and we were unable to recover it. 00:22:24.703 [2024-05-15 02:39:11.866472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.866691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.866718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.866952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.867208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.867236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.867485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.867729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.867757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.867993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.868208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.868236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 
00:22:24.704 [2024-05-15 02:39:11.868455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.868768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.868825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.869069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.869238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.869264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.869452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.869658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.869685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.869875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.870126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.870152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.870348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.870535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.870563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.870810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.871026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.871055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.871233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.871506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.871555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 
00:22:24.704 [2024-05-15 02:39:11.871801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.872063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.872092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.872320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.872540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.872568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.872859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.873099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.873128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.873330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.873498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.873522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.873715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.873952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.873981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.874166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.874477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.874523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.874759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.874983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.875012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 
00:22:24.704 [2024-05-15 02:39:11.875209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.875429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.875454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.875672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.875851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.875879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.876109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.876290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.876315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.876479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.876667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.876692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.876881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.877075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.877101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.704 qpair failed and we were unable to recover it. 00:22:24.704 [2024-05-15 02:39:11.877311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.704 [2024-05-15 02:39:11.877559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.877587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.877815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.878028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.878056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 
00:22:24.705 [2024-05-15 02:39:11.878283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.878598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.878652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.878862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.879081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.879107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.879274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.879489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.879535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.879761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.879979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.880005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.880192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.880398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.880427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.880646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.880844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.880876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.881068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.881259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.881289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 
00:22:24.705 [2024-05-15 02:39:11.881478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.881667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.881695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.881904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.882133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.882158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.882358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.882548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.882594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.882803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.882994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.883024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.883266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.883483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.883511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.883690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.883871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.883905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.884160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.884336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.884362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 
00:22:24.705 [2024-05-15 02:39:11.884536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.884728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.884752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.884945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.885129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.885154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.885349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.885563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.885590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.885828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.886000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.886026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.886210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.886553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.886605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.886821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.887065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.887096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.887312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.887533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.887561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 
00:22:24.705 [2024-05-15 02:39:11.887806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.887999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.888025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.888226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.888390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.888433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.888671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.888895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.888921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.889131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.889306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.889331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.889505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.889664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.889688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.889883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.890057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.890084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 00:22:24.705 [2024-05-15 02:39:11.890275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.890507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.890536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.705 qpair failed and we were unable to recover it. 
00:22:24.705 [2024-05-15 02:39:11.890787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.705 [2024-05-15 02:39:11.890993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.891019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.891212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.891439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.891468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.891702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.891947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.891975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.892168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.892364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.892391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.892588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.892781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.892806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.892971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.893139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.893164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.893395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.893638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.893667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 
00:22:24.706 [2024-05-15 02:39:11.893857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.894034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.894060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.894250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.894483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.894547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.894781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.895009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.895035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.895200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.895425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.895465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.895699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.895883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.895907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.896104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.896306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.896331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.896522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.896736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.896765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 
00:22:24.706 [2024-05-15 02:39:11.896959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.897140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.897166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.897365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.897573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.897598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.897755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.897978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.898007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.898172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.898401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.898427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.898646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.898888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.898913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.899117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.899292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.899317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.899508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.899693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.899718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 
00:22:24.706 [2024-05-15 02:39:11.899909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.900087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.900113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.900343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.900550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.900580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.900818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.901004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.901029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.901228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.901443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.901471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.901680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.901882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.901907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.902098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.902309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.902338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.902537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.902736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.902761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 
00:22:24.706 [2024-05-15 02:39:11.902919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.903107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.903132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.903301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.903474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.903502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.706 qpair failed and we were unable to recover it. 00:22:24.706 [2024-05-15 02:39:11.903691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.706 [2024-05-15 02:39:11.903947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.903973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.904150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.904366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.904394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.904599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.904815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.904840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.905036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.905192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.905222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.905376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.905558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.905586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 
00:22:24.707 [2024-05-15 02:39:11.905827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.906020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.906046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.906267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.906530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.906555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.906753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.906980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.907006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.907170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.907404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.907429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.907627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.907798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.907823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.908010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.908180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.908206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.908430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.908700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.908747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 
00:22:24.707 [2024-05-15 02:39:11.908980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.909172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.909212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.909439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.909717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.909764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.909993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.910177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.910208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.910439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.910669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.910698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.910890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.911063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.911089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.911251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.911440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.911467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.911676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.911885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.911910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 
00:22:24.707 [2024-05-15 02:39:11.912105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.912354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.912382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.912600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.912791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.912816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.912988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.913146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.913171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.913349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.913589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.913621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.913803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.913963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.913989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.914206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.914444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.914472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.914673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.914888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.914915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 
00:22:24.707 [2024-05-15 02:39:11.915111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.915328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.915375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.915596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.915819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.915846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.916061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.916236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.916261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.707 qpair failed and we were unable to recover it. 00:22:24.707 [2024-05-15 02:39:11.916495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.707 [2024-05-15 02:39:11.916702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.916730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.916946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.917104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.917129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.917343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.917659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.917717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.917958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.918165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.918190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 
00:22:24.708 [2024-05-15 02:39:11.918364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.918548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.918590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.918798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.919026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.919051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.919246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.919405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.919429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.919652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.919871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.919897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.920099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.920378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.920403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.920596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.920782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.920811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.921017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.921256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.921301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 
00:22:24.708 [2024-05-15 02:39:11.921548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.921773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.921801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.921986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.922149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.922174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.922366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.922582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.922628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.922872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.923057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.923082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.923276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.923516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.923560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.923772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.923963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.923988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.924198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.924462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.924494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 
00:22:24.708 [2024-05-15 02:39:11.924734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.924978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.925007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.925216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.925500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.925558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.925745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.925959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.925989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.926238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.926478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.926505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.926687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.926898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.926927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.927151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.927356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.927381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 00:22:24.708 [2024-05-15 02:39:11.927602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.927850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.927895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.708 qpair failed and we were unable to recover it. 
00:22:24.708 [2024-05-15 02:39:11.928086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.928320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.708 [2024-05-15 02:39:11.928366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.928598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.928815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.928841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.929634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.929866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.929895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.930130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.930410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.930456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.930664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.930847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.930875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.931095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.931292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.931319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.931546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.931772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.931798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 
00:22:24.709 [2024-05-15 02:39:11.931973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.932159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.932185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.932532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.932789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.932833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.933077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.933265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.933295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.933486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.933733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.933758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.933980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.934163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.934191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.934418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.934639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.934668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.934864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.935100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.935142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 
00:22:24.709 [2024-05-15 02:39:11.935354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.935619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.935663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.935896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.936090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.936119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.936338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.936512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.936539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.936803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.937029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.937059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.937273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.937480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.937508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.937721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.937895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.937928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.938129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.938313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.938341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 
00:22:24.709 [2024-05-15 02:39:11.938564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.938852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.938901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.939164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.939401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.939427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.939621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.939875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.939900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.940077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.940293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.940318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.940539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.940757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.940787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.941042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.941240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.941279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.941502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.941753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.941781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 
00:22:24.709 [2024-05-15 02:39:11.942013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.942234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.942262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.942453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.942647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.942672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.709 qpair failed and we were unable to recover it. 00:22:24.709 [2024-05-15 02:39:11.942967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.709 [2024-05-15 02:39:11.943144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.943169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.943369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.943598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.943626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.943838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.944064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.944090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.944264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.944509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.944536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.944742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.944963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.944990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 
00:22:24.710 [2024-05-15 02:39:11.945186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.945365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.945393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.945605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.945815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.945842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.946040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.946266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.946294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.946532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.946717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.946744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.946946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.947143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.947168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.947358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.947554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.947579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.947767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.947965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.947991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 
00:22:24.710 [2024-05-15 02:39:11.948155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.948353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.948378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.948601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.948824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.948852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.949053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.949255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.949281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.949446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.949646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.949670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.949860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.950039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.950065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.950234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.950482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.950510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.950724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.950961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.950987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 
00:22:24.710 [2024-05-15 02:39:11.951157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.951355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.951381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.951548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.951715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.951741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.951896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.952063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.952088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.952271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.952513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.952559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.952741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.952904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.952937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.953107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.953270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.953295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.953463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.953659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.953685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 
00:22:24.710 [2024-05-15 02:39:11.953908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.954075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.954100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.954277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.954435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.954460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.954652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.954820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.954844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.955015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.955208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.955238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.710 qpair failed and we were unable to recover it. 00:22:24.710 [2024-05-15 02:39:11.955454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.710 [2024-05-15 02:39:11.955645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.955674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.955900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.956110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.956154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.956339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.956564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.956589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 
00:22:24.711 [2024-05-15 02:39:11.956802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.957053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.957079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.957272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.957513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.957540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.957767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.957971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.957997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.958170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.958364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.958392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.958566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.958772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.958800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.959013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.959205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.959233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.959437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.959634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.959659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 
00:22:24.711 [2024-05-15 02:39:11.959819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.960034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.960064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.960285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.960529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.960575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.960779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.960949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.960984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.961162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.961334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.961359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.961544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.961719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.961747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.961938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.962113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.962140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.962431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.962777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.962830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 
00:22:24.711 [2024-05-15 02:39:11.963031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.963199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.963225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.963435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.963649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.963678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.963868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.964063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.964090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.964267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.964505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.964547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.964762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.964936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.964961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.965178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.965396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.965423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.965663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.965858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.965884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 
00:22:24.711 [2024-05-15 02:39:11.966092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.966303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.966349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.966587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.966791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.966817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.966986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.967243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.967271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.967511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.967682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.967708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.967907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.968091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.968116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.968325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.968518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-05-15 02:39:11.968546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.711 qpair failed and we were unable to recover it. 00:22:24.711 [2024-05-15 02:39:11.968782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.968992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.969019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 
00:22:24.712 [2024-05-15 02:39:11.969198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.969470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.969497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.969656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.969869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.969893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.970077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.970254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.970278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.970474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.970664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.970706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.970895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.971115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.971143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.971368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.971601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.971629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.971822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.972036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.972065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 
00:22:24.712 [2024-05-15 02:39:11.972333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.972534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.972561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.972823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.973011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.973037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.973290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.973528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.973582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.973815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.973983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.974009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.974215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.974378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.974417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.974612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.974831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.974860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.975093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.975315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.975343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 
00:22:24.712 [2024-05-15 02:39:11.975527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.975725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.975750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.975973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.976234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.976266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.976493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.976698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.976738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.976950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.977184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.977212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.977446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.977665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.977691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.977983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.978159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.978187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.978368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.978614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.978642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 
00:22:24.712 [2024-05-15 02:39:11.978855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.979028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.979054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.979280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.979505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.979529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.979804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.980046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.980073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.980247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.980461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.980508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.980727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.980986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.981012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.981187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.981407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.981434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 00:22:24.712 [2024-05-15 02:39:11.981649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.981842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.981868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.712 qpair failed and we were unable to recover it. 
00:22:24.712 [2024-05-15 02:39:11.982069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.982227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.712 [2024-05-15 02:39:11.982252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.982446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.982693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.982724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.982946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.983118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.983148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.983369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.983618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.983647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.983886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.984107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.984133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.984350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.984599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.984654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.984873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.985059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.985086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 
00:22:24.713 [2024-05-15 02:39:11.985256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.985503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.985548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.985776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.985947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.985973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.986195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.986436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.986483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.986694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.986884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.986911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.987113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.987331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.987360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.987604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.987833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.987865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.988076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.988294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.988320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 
00:22:24.713 [2024-05-15 02:39:11.988515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.988730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.988760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.988974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.989180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.989206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.989394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.989561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.989586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.989809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.989995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.990021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.990204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.990372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.990398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.990596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.990755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.990780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.990974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.991158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.991184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 
00:22:24.713 [2024-05-15 02:39:11.991403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.991627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.991654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.991883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.992059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.992085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.992294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.992463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.992503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.992711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.992961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.992997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.993192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.993412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.993440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.713 qpair failed and we were unable to recover it. 00:22:24.713 [2024-05-15 02:39:11.993691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.713 [2024-05-15 02:39:11.993950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.993976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.994132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.994361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.994386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 
00:22:24.714 [2024-05-15 02:39:11.994600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.994782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.994807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.995022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.995197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.995223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.995388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.995609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.995634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.995889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.996073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.996100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.996294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.996528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.996576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.996773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.997033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.997059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.997227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.997389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.997414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 
00:22:24.714 [2024-05-15 02:39:11.997602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.997818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.997846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.998044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.998232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.998260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.998501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.998818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.998864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.999113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.999323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.999348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:11.999573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.999838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:11.999881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.000075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.000273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.000299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.000502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.000713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.000741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 
00:22:24.714 [2024-05-15 02:39:12.000982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.001153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.001180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.001407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.001643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.001674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.001894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.002117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.002142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.002375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.002565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.002593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.002808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.003062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.003088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.003284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.004366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.004401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.004636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.004878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.004907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 
00:22:24.714 [2024-05-15 02:39:12.005066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224b0b0 is same with the state(5) to be set 00:22:24.714 [2024-05-15 02:39:12.005286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.005519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.005566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.005772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.006022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.006049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.006209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.006403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.006429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.006653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.006862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.006887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.007090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.007331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.007366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.007590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.007787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.007813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.714 qpair failed and we were unable to recover it. 00:22:24.714 [2024-05-15 02:39:12.008034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.008245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.714 [2024-05-15 02:39:12.008273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 
00:22:24.715 [2024-05-15 02:39:12.008514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.008725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.008752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.008981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.009157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.009182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.009410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.009642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.009686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.009884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.010083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.010111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.010308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.010615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.010664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.010885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.011055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.011081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.011266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.011514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.011556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 
00:22:24.715 [2024-05-15 02:39:12.011782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.011954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.011982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.012176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.012400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.012428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.012654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.012845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.012870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.013065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.013305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.013333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.013560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.013745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.013771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.013947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.014144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.014168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.014380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.014621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.014664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 
00:22:24.715 [2024-05-15 02:39:12.014893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.015061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.015088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.015303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.015535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.015578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.015773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.015961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.015987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.016184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.016406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.016449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.016639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.016858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.016883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.017042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.017238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.017265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.017483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.017666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.017694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 
00:22:24.715 [2024-05-15 02:39:12.017861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.018081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.018108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.018296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.018529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.018580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.018796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.018999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.019025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.019251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.019456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.019498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.019729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.019923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.019958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.020153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.020400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.020442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.715 [2024-05-15 02:39:12.020642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.020855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.020883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 
00:22:24.715 [2024-05-15 02:39:12.021053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.021219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.715 [2024-05-15 02:39:12.021245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.715 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.021459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.021742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.021785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.022018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.022227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.022255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.022519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.022706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.022731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.022958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.023123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.023148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.023418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.023676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.023718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.023922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.024123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.024147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 
00:22:24.716 [2024-05-15 02:39:12.024383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.024612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.024654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.024890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.025119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.025145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.025367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.025612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.025654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.025884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.026052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.026079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.026338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.026660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.026704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.026901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.027158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.027184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.027386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.027623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.027666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 
00:22:24.716 [2024-05-15 02:39:12.027860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.028029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.028056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.028231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.028452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.028477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.028673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.028878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.028904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.029114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.029342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.029384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.029639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.029830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.029856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.030033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.030204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.030232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.030428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.030610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.030651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 
00:22:24.716 [2024-05-15 02:39:12.030814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.030991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.031032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.031294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.031504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.031548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.031745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.031940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.031967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.032218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.032446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.032474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.032710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.032920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.032953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.033173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.033412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.033461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.033661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.033870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.033896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 
00:22:24.716 [2024-05-15 02:39:12.034114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.034337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.034381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.034627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.034845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.034870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.716 qpair failed and we were unable to recover it. 00:22:24.716 [2024-05-15 02:39:12.035136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.716 [2024-05-15 02:39:12.035341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.035382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.035587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.035784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.035811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.036029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.036271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.036315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.036512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.036783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.036826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.037052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.037297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.037341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 
00:22:24.717 [2024-05-15 02:39:12.037549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.037736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.037762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.037934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.038118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.038143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.038345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.038617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.038660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.038878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.039072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.039099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.039295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.039531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.039581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.039837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.040095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.040122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.040388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.040620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.040663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 
00:22:24.717 [2024-05-15 02:39:12.040859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.041079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.041132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.041392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.041693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.041739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.041967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.042162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.042188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.042412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.042652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.042695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.042901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.043133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.043159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.043384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.043589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.043632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.043790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.043987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.044013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 
00:22:24.717 [2024-05-15 02:39:12.044212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.044428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.044475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.044685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.044891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.044916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.045111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.045357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.045400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.045584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.045790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.045815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.046005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.046198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.046225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.046448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.046642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.717 [2024-05-15 02:39:12.046668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.717 qpair failed and we were unable to recover it. 00:22:24.717 [2024-05-15 02:39:12.046833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.047076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.047106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 
00:22:24.718 [2024-05-15 02:39:12.047311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.047524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.047568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.047791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.047955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.047993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.048207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.048414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.048458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.048676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.048884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.048914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.049140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.049460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.049505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.049746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.049921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.049954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.050178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.050434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.050481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 
00:22:24.718 [2024-05-15 02:39:12.050709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.050879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.050905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.051170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.051378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.051421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.051671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.051902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.051927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.052137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.052375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.052420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.052646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.052849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.052874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.053082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.053369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.053412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.053634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.053879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.053908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 
00:22:24.718 [2024-05-15 02:39:12.054169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.054453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.054496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.054683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.054922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.054953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.055181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.055482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.055528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.055759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.055983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.056011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.056235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.056474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.056517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.056740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.056954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.056990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.057190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.057376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.057403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 
00:22:24.718 [2024-05-15 02:39:12.057617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.057850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.057875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.058143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.058358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.058399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.058651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.058889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.058918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.059153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.059420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.059468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.059707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.059912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.059946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.060194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.060411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.060456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.718 [2024-05-15 02:39:12.060641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.060850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.060877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 
00:22:24.718 [2024-05-15 02:39:12.061115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.061318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.718 [2024-05-15 02:39:12.061360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.718 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.061541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.061754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.061779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.061988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.062224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.062266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.062486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.062747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.062772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.062970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.063240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.063286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.063507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.063776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.063818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.064066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.064358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.064383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 
00:22:24.719 [2024-05-15 02:39:12.064561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.064732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.064758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.064953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.065181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.065224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.065438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.065689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.065715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.065914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.066112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.066138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.066343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.066544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.066587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.066755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.066983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.067010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.067232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.067474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.067501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 
00:22:24.719 [2024-05-15 02:39:12.067692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.067906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.067939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.068129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.068353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.068397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.068625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.068805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.068831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.069050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.069231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.069258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.069451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.069636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.069663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.069851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.070068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.070112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.070338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.070623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.070686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 
00:22:24.719 [2024-05-15 02:39:12.070908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.071122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.071149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.071371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.071638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.071680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.071894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.072096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.072122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.072347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.072589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.072616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.072840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.073024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.073068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.073329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.073577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.073624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.073824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.074031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.074084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 
00:22:24.719 [2024-05-15 02:39:12.074335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.074657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.074703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.074905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.075108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.075134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.719 qpair failed and we were unable to recover it. 00:22:24.719 [2024-05-15 02:39:12.075330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.719 [2024-05-15 02:39:12.075607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.075650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.075811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.076002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.076028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.076263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.076509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.076538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.076773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.077032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.077076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.077336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.077638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.077684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 
00:22:24.720 [2024-05-15 02:39:12.077870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.078067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.078093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.078259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.078477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.078521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.078698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.078889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.078915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.079123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.079331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.079375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.079573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.079781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.079806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.079999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.080186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.080220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.080430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.080695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.080738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 
00:22:24.720 [2024-05-15 02:39:12.080940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.081158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.081184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.081392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.081643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.081686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.081910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.082137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.082163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.082358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.082663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.082709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.082936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.083159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.083185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.083417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.083683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.083729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.083924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.084157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.084183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 
00:22:24.720 [2024-05-15 02:39:12.084401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.084693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.084741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.084902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.085145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.085190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.085407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.085639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.085682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.085854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.086045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.086090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.086310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.086554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.086596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.086791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.086996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.087025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.087237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.087468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.087510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 
00:22:24.720 [2024-05-15 02:39:12.091147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.091425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.091455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.091622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.091851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.091878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.092055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.092240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.092282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.092532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.092732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.720 [2024-05-15 02:39:12.092774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.720 qpair failed and we were unable to recover it. 00:22:24.720 [2024-05-15 02:39:12.092951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.093149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.093175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.093425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.093651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.093677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.093899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.094102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.094129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 
00:22:24.721 [2024-05-15 02:39:12.094323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.094549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.094590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.094793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.095032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.095078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.095296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.095474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.095502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.095704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.095909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.095940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.096166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.096369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.096414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.096672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.096863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.096889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.097145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.097417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.097443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 
00:22:24.721 [2024-05-15 02:39:12.097704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.097892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.097917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.098236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.098488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.098533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.098764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.099019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.099066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.099318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.099558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.099587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.099765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.099968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.099995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.100190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.100426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.100470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 00:22:24.721 [2024-05-15 02:39:12.100713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.100947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.721 [2024-05-15 02:39:12.100974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.721 qpair failed and we were unable to recover it. 
00:22:24.992 [2024-05-15 02:39:12.101158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.101354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.101382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.101599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.101824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.101849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.102033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.102268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.102311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.102557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.102737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.102764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.102972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.103192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.103238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.103492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.103729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.103772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.103992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.104218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.104261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 
00:22:24.992 [2024-05-15 02:39:12.104485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.104661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.104688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.104883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.105123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.105166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.105382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.105692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.105734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.105901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.106108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.106134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.106355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.106616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.106658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.106881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.107078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.107105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.992 qpair failed and we were unable to recover it. 00:22:24.992 [2024-05-15 02:39:12.107299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.992 [2024-05-15 02:39:12.107576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.107618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 
00:22:24.993 [2024-05-15 02:39:12.107789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.108007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.108053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.108331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.108603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.108645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.108841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.109031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.109076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.109336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.109711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.109758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.109989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.110201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.110243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.110458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.110668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.110711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.110904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.111110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.111138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 
00:22:24.993 [2024-05-15 02:39:12.111327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.111562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.111604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.111791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.112030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.112074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.112287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.112493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.112535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.112768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.112963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.112991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.113210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.113483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.113525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.113765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.113927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.113971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.114231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.114491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.114518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 
00:22:24.993 [2024-05-15 02:39:12.114745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.114981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.115026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.115224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.115463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.115506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.115686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.115900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.115927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.116125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.116325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.116368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.116584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.116765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.116790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.116993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.117233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.117276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.117500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.117710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.117735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 
00:22:24.993 [2024-05-15 02:39:12.117937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.118195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.118238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.118442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.118672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.118715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.118920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.119118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.119161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.119363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.119632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.119675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.119867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.120081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.120133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.993 [2024-05-15 02:39:12.120352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.120568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.993 [2024-05-15 02:39:12.120610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.993 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.120800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.121024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.121074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 
00:22:24.994 [2024-05-15 02:39:12.121302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.121540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.121568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.121777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.121948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.121979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.122206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.122469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.122512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.122765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.122948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.122986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.123199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.123434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.123478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.123677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.123870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.123896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.124091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.124398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.124449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 
00:22:24.994 [2024-05-15 02:39:12.124699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.124914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.124959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.125137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.125311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.125338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.125562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.125749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.125775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.125974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.126177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.126221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.126481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.126713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.126756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.126949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.127179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.127224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.127458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.127654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.127679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 
00:22:24.994 [2024-05-15 02:39:12.127870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.128049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.128075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.128338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.128547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.128591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.128789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.128987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.129015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.129266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.129495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.129541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.129760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.129971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.129998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.130205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.130449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.130492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.130680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.130918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.130952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 
00:22:24.994 [2024-05-15 02:39:12.131152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.131377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.131430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.131716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.131939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.131965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.132187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.132381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.132425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.994 qpair failed and we were unable to recover it. 00:22:24.994 [2024-05-15 02:39:12.132674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.132910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.994 [2024-05-15 02:39:12.132948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.133173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.133367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.133411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.133629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.133843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.133869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.134060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.134307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.134354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 
00:22:24.995 [2024-05-15 02:39:12.134588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.134787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.134813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.135032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.135267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.135295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.135532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.135797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.135839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.136064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.136275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.136318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.136480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.136653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.136680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.136900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.137116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.137163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.137411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.137782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.137828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 
00:22:24.995 [2024-05-15 02:39:12.138092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.138343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.138388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.138600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.138811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.138836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.139094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.139312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.139355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.139586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.139793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.139820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.140030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.140269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.140312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.140537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.140772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.140798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.141008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.141250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.141292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 
00:22:24.995 [2024-05-15 02:39:12.141509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.141748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.141791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.142012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.142262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.142305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.142541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.142783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.142809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.143068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.143357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.143401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.143663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.143874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.143900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.144073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.144354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.144396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.144620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.144825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.144850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 
00:22:24.995 [2024-05-15 02:39:12.145056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.145274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.145318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.995 [2024-05-15 02:39:12.145504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.145691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.995 [2024-05-15 02:39:12.145717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.995 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.145913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.146144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.146194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.146425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.146698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.146741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.146943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.147108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.147133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.147400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.147682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.147739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.147960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.148156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.148182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 
00:22:24.996 [2024-05-15 02:39:12.148395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.148594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.148622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.148826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.149101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.149127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.149348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.149616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.149659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.149854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.150091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.150135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.150385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.150648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.150690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.150908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.151082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.151108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 00:22:24.996 [2024-05-15 02:39:12.151327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.151593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.996 [2024-05-15 02:39:12.151635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:24.996 qpair failed and we were unable to recover it. 
00:22:25.002 [2024-05-15 02:39:12.220699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.220885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.220910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 00:22:25.002 [2024-05-15 02:39:12.221153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.221367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.221398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 00:22:25.002 [2024-05-15 02:39:12.221645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.222016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.222043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 00:22:25.002 [2024-05-15 02:39:12.222288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.222495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.222524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 00:22:25.002 [2024-05-15 02:39:12.222948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.223177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.223202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 00:22:25.002 [2024-05-15 02:39:12.223421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.223618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.223647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 00:22:25.002 [2024-05-15 02:39:12.223847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.224055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.224082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 
00:22:25.002 [2024-05-15 02:39:12.224277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.224516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.224544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 00:22:25.002 [2024-05-15 02:39:12.224744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.224995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.225021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 00:22:25.002 [2024-05-15 02:39:12.225260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.225484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.002 [2024-05-15 02:39:12.225511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.002 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.225867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.226137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.226164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.226361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.226526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.226571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.226785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.227005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.227032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.227201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.227416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.227444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 
00:22:25.003 [2024-05-15 02:39:12.227692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.227895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.227919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.228127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.228332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.228359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.228555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.228795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.228823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.229046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.229240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.229265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.229515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.229772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.229821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.230046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.230266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.230294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.230487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.230707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.230735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 
00:22:25.003 [2024-05-15 02:39:12.230911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.231128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.231155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.231355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.231563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.231591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.231825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.232026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.232053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.232215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.232401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.232425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.232616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.232949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.233017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.233260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.233504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.233535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.233820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.234071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.234097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 
00:22:25.003 [2024-05-15 02:39:12.234318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.234504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.234537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.234719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.234937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.234970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.235180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.235422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.235469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.235690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.235852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.235877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.236065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.236286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.236315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.236617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.236833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.236860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.237047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.237245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.237270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 
00:22:25.003 [2024-05-15 02:39:12.237463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.237760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.237784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.003 qpair failed and we were unable to recover it. 00:22:25.003 [2024-05-15 02:39:12.237977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.003 [2024-05-15 02:39:12.238171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.238199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.238426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.238823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.238875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.239103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.239337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.239387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.239714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.239972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.240015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.240239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.240522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.240573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.240778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.241009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.241036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 
00:22:25.004 [2024-05-15 02:39:12.241210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.241467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.241514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.241732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.241941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.241969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.242179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.242416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.242444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.242800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.243088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.243114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.243315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.243535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.243562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.243779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.243971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.243997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.244169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.244515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.244569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 
00:22:25.004 [2024-05-15 02:39:12.244876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.245147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.245174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.245384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.245590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.245618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.245847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.246011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.246039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.246245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.246446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.246470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.246688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.246983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.247009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.247204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.247467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.247496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.247707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.247906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.247938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 
00:22:25.004 [2024-05-15 02:39:12.248132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.248411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.248464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.248708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.248924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.248959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.249160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.249343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.249368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.249586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.249795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.249823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.250065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.250309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.250337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.250593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.250833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.250860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 00:22:25.004 [2024-05-15 02:39:12.251074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.251267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.004 [2024-05-15 02:39:12.251291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.004 qpair failed and we were unable to recover it. 
00:22:25.005 [2024-05-15 02:39:12.251483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.251864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.251918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.252145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.252490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.252538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.252762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.253022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.253048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.253221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.253578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.253606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.253835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.254007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.254034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.254232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.254480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.254531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.254739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.254955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.254994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 
00:22:25.005 [2024-05-15 02:39:12.255243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.255469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.255519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.255758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.255984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.256013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.256250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.256470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.256516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.256750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.256971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.257014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.257240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.257453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.257481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.257723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.257941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.257969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.258196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.258445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.258488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 
00:22:25.005 [2024-05-15 02:39:12.258703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.258885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.258915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.259149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.259437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.259461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.259624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.259861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.259885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.260107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.260297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.260325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.260548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.260763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.260808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.261041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.261228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.261257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.261434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.261720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.261775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 
00:22:25.005 [2024-05-15 02:39:12.262008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.262251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.262290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.262468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.262772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.262826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.005 qpair failed and we were unable to recover it. 00:22:25.005 [2024-05-15 02:39:12.263069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.263252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.005 [2024-05-15 02:39:12.263280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.263501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.263680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.263708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.263925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.264133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.264160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.264408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.264716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.264768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.264983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.265171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.265199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 
00:22:25.006 [2024-05-15 02:39:12.265413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.265643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.265693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.265943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.266161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.266191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.266400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.266685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.266715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.267010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.267243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.267282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.267524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.267789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.267836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.268079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.268433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.268483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.268668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.268864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.268892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 
00:22:25.006 [2024-05-15 02:39:12.269134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.269360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.269384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.269642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.269870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.269895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.270073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.270293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.270318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.270546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.270785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.270829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.271085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.271302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.271327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.271537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.271838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.271865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 00:22:25.006 [2024-05-15 02:39:12.272067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.272226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.006 [2024-05-15 02:39:12.272267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.006 qpair failed and we were unable to recover it. 
00:22:25.006 [2024-05-15 02:39:12.272492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:25.006 [2024-05-15 02:39:12.272701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:25.006 [2024-05-15 02:39:12.272726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 
00:22:25.006 qpair failed and we were unable to recover it. 
[... the same three-message cycle -- two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" entries, one "nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats continuously with successive timestamps from 02:39:12.272 through 02:39:12.347 ...] 
00:22:25.013 [2024-05-15 02:39:12.346950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:25.013 [2024-05-15 02:39:12.347172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:25.013 [2024-05-15 02:39:12.347200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 
00:22:25.013 qpair failed and we were unable to recover it. 
00:22:25.013 [2024-05-15 02:39:12.347428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.347729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.347781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.347993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.348191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.348219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.348459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.348775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.348827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.349062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.349405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.349462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.349697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.349877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.349905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.350136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.350329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.350359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.350581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.350833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.350883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 
00:22:25.013 [2024-05-15 02:39:12.351127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.351480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.351532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.351782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.352024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.352054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.352283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.352499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.352523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.352760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.352977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.353003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.353170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.353357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.353389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.353566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.353900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.353956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.013 [2024-05-15 02:39:12.354169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.354502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.354552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 
00:22:25.013 [2024-05-15 02:39:12.354808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.355020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.013 [2024-05-15 02:39:12.355045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.013 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.355269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.355531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.355570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.355791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.356005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.356034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.356219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.356560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.356620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.356842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.357095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.357123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.357380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.357591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.357618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.357830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.358049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.358078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 
00:22:25.014 [2024-05-15 02:39:12.358284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.358456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.358481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.358673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.358922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.358965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.359178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.359390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.359417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.359626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.359878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.359905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.360119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.360402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.360453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.360694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.360880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.360908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.361135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.361347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.361375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 
00:22:25.014 [2024-05-15 02:39:12.361584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.361768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.361795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.362032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.362212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.362240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.362483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.362670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.362694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.362863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.363068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.363096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.363316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.363525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.363555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.363752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.363987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.364013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.364227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.364527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.364579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 
00:22:25.014 [2024-05-15 02:39:12.364789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.365016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.365044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.365267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.365514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.365541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.365765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.365984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.366010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.366226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.366442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.366466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.014 [2024-05-15 02:39:12.366676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.366848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.014 [2024-05-15 02:39:12.366875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.014 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.367117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.367306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.367333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.367538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.367776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.367804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 
00:22:25.015 [2024-05-15 02:39:12.368025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.368248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.368278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.368486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.368679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.368706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.368925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.369169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.369197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.369444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.369690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.369717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.369942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.370155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.370182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.370390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.370746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.370799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.371019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.371264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.371293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 
00:22:25.015 [2024-05-15 02:39:12.371586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.371761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.371785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.371977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.372145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.372171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.372411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.372594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.372623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.372835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.373057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.373086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.373278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.373486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.373514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.373721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.373969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.373997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.374246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.374433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.374463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 
00:22:25.015 [2024-05-15 02:39:12.374707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.374942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.374971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.375213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.375431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.375459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.375632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.375848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.375876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.376107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.376300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.376326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.376602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.376822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.376845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.377050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.377259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.377287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.377502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.377742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.377776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 
00:22:25.015 [2024-05-15 02:39:12.378009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.378176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.378208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.015 qpair failed and we were unable to recover it. 00:22:25.015 [2024-05-15 02:39:12.378451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.378794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.015 [2024-05-15 02:39:12.378818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.379046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.379280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.379307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.379510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.379804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.379869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.380072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.380289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.380316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.380491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.380698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.380726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.380900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.381117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.381142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 
00:22:25.016 [2024-05-15 02:39:12.381369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.381686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.381726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.381975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.382182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.382210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.382392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.382754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.382809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.383050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.383284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.383338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.383591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.383810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.383862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.384126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.384364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.384389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.384628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.384869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.384897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 
00:22:25.016 [2024-05-15 02:39:12.385122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.385311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.385335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.385526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.385780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.385831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.386051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.386289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.386337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.386546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.386781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.386810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.387025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.387245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.387273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.387509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.387833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.387892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.388150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.388368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.388397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 
00:22:25.016 [2024-05-15 02:39:12.388593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.388809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.388837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.389047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.389261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.389289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.389502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.389749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.389776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.390012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.390218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.390243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.390427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.390610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.390637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.390856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.391106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.391133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.016 qpair failed and we were unable to recover it. 00:22:25.016 [2024-05-15 02:39:12.391326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.016 [2024-05-15 02:39:12.391577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.391601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.017 qpair failed and we were unable to recover it. 
00:22:25.017 [2024-05-15 02:39:12.391832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.392065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.392092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.017 qpair failed and we were unable to recover it. 00:22:25.017 [2024-05-15 02:39:12.392310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.392508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.392536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.017 qpair failed and we were unable to recover it. 00:22:25.017 [2024-05-15 02:39:12.392757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.392983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.393012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.017 qpair failed and we were unable to recover it. 00:22:25.017 [2024-05-15 02:39:12.393237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.393411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.393439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.017 qpair failed and we were unable to recover it. 00:22:25.017 [2024-05-15 02:39:12.393657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.393878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.017 [2024-05-15 02:39:12.393905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.017 qpair failed and we were unable to recover it. 00:22:25.017 [2024-05-15 02:39:12.394103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.289 [2024-05-15 02:39:12.394309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.289 [2024-05-15 02:39:12.394348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.289 qpair failed and we were unable to recover it. 00:22:25.289 [2024-05-15 02:39:12.394526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.289 [2024-05-15 02:39:12.394742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.289 [2024-05-15 02:39:12.394799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.289 qpair failed and we were unable to recover it. 
00:22:25.289 [2024-05-15 02:39:12.395095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.289 [2024-05-15 02:39:12.395370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.289 [2024-05-15 02:39:12.395423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:25.289 qpair failed and we were unable to recover it.
[... the same sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 02:39:12.395669 through 02:39:12.470416 ...]
00:22:25.294 [2024-05-15 02:39:12.470602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.294 [2024-05-15 02:39:12.470775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.294 [2024-05-15 02:39:12.470803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:25.294 qpair failed and we were unable to recover it.
00:22:25.294 [2024-05-15 02:39:12.471017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.294 [2024-05-15 02:39:12.471294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.294 [2024-05-15 02:39:12.471344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.294 qpair failed and we were unable to recover it. 00:22:25.294 [2024-05-15 02:39:12.471586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.294 [2024-05-15 02:39:12.471799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.294 [2024-05-15 02:39:12.471827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.472039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.472252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.472280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.472469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.472681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.472710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.472906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.473140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.473169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.473406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.473615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.473639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.473837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.474051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.474077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 
00:22:25.295 [2024-05-15 02:39:12.474286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.474568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.474592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.474818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.475058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.475089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.475361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.475613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.475643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.475830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.476043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.476072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.476304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.476474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.476499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.476702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.476919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.476953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.477153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.477361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.477388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 
00:22:25.295 [2024-05-15 02:39:12.477632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.477864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.477888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.478089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.478409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.478469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.478689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.478873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.478903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.479123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.479336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.479364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.479669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.479913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.479954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.480163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.480381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.480409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.480629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.480807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.480834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 
00:22:25.295 [2024-05-15 02:39:12.481052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.481212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.481237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.481459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.481632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.481656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.481828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.481993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.482019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.482209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.482408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.482443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.482662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.482870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.482898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.483150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.483319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.483343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.483539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.483794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.483826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 
00:22:25.295 [2024-05-15 02:39:12.484049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.484275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.484304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.484555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.484822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.484847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.485055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.485308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.485336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.295 qpair failed and we were unable to recover it. 00:22:25.295 [2024-05-15 02:39:12.485532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.295 [2024-05-15 02:39:12.485733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.485758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.485995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.486220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.486248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.486488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.486818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.486871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.487121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.487332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.487384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 
00:22:25.296 [2024-05-15 02:39:12.487633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.487817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.487845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.488072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.488301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.488350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.488530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.488713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.488741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.488967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.489180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.489208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.489440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.489668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.489696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.489917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.490109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.490137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.490331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.490531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.490563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 
00:22:25.296 [2024-05-15 02:39:12.490795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.491022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.491048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.491225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.491403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.491431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.491667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.491857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.491882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.492065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.492251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.492281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.492546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.492706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.492731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.492953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.493145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.493173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.493385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.493606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.493633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 
00:22:25.296 [2024-05-15 02:39:12.493843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.494063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.494088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.494301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.494533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.494558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.494725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.494947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.494973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.495169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.495388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.495416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.495627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.495833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.495860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.496051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.496207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.496252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.496474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.496681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.496708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 
00:22:25.296 [2024-05-15 02:39:12.496956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.497172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.497200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.497425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.497652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.497698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.497888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.498129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.498158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.498375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.498623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.498648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.498868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.499053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.499082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.296 qpair failed and we were unable to recover it. 00:22:25.296 [2024-05-15 02:39:12.499273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.296 [2024-05-15 02:39:12.499552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.499599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.499814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.499997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.500026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 
00:22:25.297 [2024-05-15 02:39:12.500225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.500383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.500408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.500615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.500827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.500855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.501113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.501302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.501330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.501577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.501899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.501959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.502168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.502385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.502435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.502653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.502840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.502868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.503052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.503248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.503273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 
00:22:25.297 [2024-05-15 02:39:12.503451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.503790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.503848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.504062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.504255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.504283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.504495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.504754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.504809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.505040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.505265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.505327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.505540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.505721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.505751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.505971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.506169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.506195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.506383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.506620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.506680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 
00:22:25.297 [2024-05-15 02:39:12.506914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.507132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.507161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.507382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.507560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.507587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.507760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.508000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.508028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.508220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.508529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.508578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.508796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.508990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.509020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.509206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.509427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.509471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.509694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.509927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.509960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 
00:22:25.297 [2024-05-15 02:39:12.510204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.510468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.510515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.510721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.510938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.510972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.511161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.511398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.511426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.511645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.511838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.511863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.512042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.512292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.512344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.512560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.512789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.512833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 00:22:25.297 [2024-05-15 02:39:12.513058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.513220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.297 [2024-05-15 02:39:12.513246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.297 qpair failed and we were unable to recover it. 
00:22:25.297 [2024-05-15 02:39:12.513484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.513845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.513893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.514117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.514310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.514335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.514526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.514748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.514797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.515010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.515202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.515247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.515457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.515713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.515761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.515980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.516196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.516224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.516441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.516701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.516726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 
00:22:25.298 [2024-05-15 02:39:12.516925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.517125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.517150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.517336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.517602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.517652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.517907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.518139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.518170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.518376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.518650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.518696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.518920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.519118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.519146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.519349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.519520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.519549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.519734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.519928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.519965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 
00:22:25.298 [2024-05-15 02:39:12.520164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.520518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.520562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.520759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.520974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.521002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.521248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.521466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.521494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.521712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.521958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.521989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.522238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.522448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.522497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.522742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.522979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.523008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.523204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.523395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.523419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 
00:22:25.298 [2024-05-15 02:39:12.523606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.523819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.523844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.524031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.524227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.524254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.524486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.524767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.524820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.525013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.525229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.525256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.298 qpair failed and we were unable to recover it. 00:22:25.298 [2024-05-15 02:39:12.525507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.298 [2024-05-15 02:39:12.525690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.525719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.525958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.526131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.526158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.526373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.526615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.526641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 
00:22:25.299 [2024-05-15 02:39:12.526851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.527068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.527094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.527290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.527475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.527500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.527720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.527994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.528021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.528229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.528554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.528606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.528844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.529055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.529083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.529268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.529480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.529507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.529691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.529907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.529938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 
00:22:25.299 [2024-05-15 02:39:12.530164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.530378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.530407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.530612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.530824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.530851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.531093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.531312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.531337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.531537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.531726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.531751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.531995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.532200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.532228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.532444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.532722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.532770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.532979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.533196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.533224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 
00:22:25.299 [2024-05-15 02:39:12.533436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.533757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.533816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.534009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.534222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.534252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.534442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.534654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.534682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.534902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.535124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.535156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.535381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.535570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.535599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.535812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.535980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.536006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.536168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.536406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.536435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 
00:22:25.299 [2024-05-15 02:39:12.536649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.536835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.536867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.537090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.537401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.537449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.537693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.537908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.537945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.538163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.538379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.538403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.538589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.538782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.538826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.539048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.539215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.539257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.299 qpair failed and we were unable to recover it. 00:22:25.299 [2024-05-15 02:39:12.539466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.299 [2024-05-15 02:39:12.539684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.539711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 
00:22:25.300 [2024-05-15 02:39:12.539928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.540148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.540176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.540396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.540588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.540618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.540859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.541103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.541132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.541360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.541622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.541671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.541912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.542174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.542205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.542418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.542630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.542658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.542847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.543051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.543080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 
00:22:25.300 [2024-05-15 02:39:12.543287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.543492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.543519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.543696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.543939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.543964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.544159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.544333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.544363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.544569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.544728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.544753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.544914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.545138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.545166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.545376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.545544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.545569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.545758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.545980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.546006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 
00:22:25.300 [2024-05-15 02:39:12.546222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.546450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.546477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.546698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.546884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.546911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.547141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.547520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.547583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.547831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.548075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.548103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.548290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.548504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.548532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.548747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.548952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.548981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.549190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.549385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.549413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 
00:22:25.300 [2024-05-15 02:39:12.549606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.549845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.549873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.550050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.550265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.550291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.550483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.550692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.550717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.550957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.551157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.551182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.551417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.551581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.551606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.551797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.552015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.552041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.552205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.552364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.552389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 
00:22:25.300 [2024-05-15 02:39:12.552608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.552806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.552833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.300 qpair failed and we were unable to recover it. 00:22:25.300 [2024-05-15 02:39:12.553027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.300 [2024-05-15 02:39:12.553266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.553293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.553534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.553698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.553724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.553915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.554145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.554171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.554412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.554673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.554697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.554970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.555150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.555175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.555347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.555565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.555589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 
00:22:25.301 [2024-05-15 02:39:12.555802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.556023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.556049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.556244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.556478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.556506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.556722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.556942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.556981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.557233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.557596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.557656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.557893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.558089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.558118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.558336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.558678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.558737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.558959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.559199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.559227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 
00:22:25.301 [2024-05-15 02:39:12.559466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.559685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.559713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.559925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.560118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.560145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.560333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.560548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.560573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.560737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.560990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.561019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.561241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.561451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.561475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.561719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.561907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.561942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.562149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.562311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.562336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 
00:22:25.301 [2024-05-15 02:39:12.562522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.562734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.562762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.563007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.563225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.563257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.563470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.563683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.563711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.563923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.564155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.564180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.564372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.564544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.564572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.564826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.565094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.565120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.565341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.565514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.565542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 
00:22:25.301 [2024-05-15 02:39:12.565772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.566015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.566044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.566223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.566425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.566450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.566660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.566901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.566927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.301 qpair failed and we were unable to recover it. 00:22:25.301 [2024-05-15 02:39:12.567133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.301 [2024-05-15 02:39:12.567439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.567507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.567746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.567920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.567956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.568175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.568378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.568406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.568620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.568885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.568943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 
00:22:25.302 [2024-05-15 02:39:12.569194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.569442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.569469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.569655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.569895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.569923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.570132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.570358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.570383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.570572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.570766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.570791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.571006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.571184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.571212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.571426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.571681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.571730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.571938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.572132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.572157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 
00:22:25.302 [2024-05-15 02:39:12.572357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.572594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.572619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.572844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.573057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.573083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.573314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.573522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.573550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.573753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.573941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.573977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.574196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.574412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.574443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.574689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.574849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.574874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.575100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.575291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.575319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 
00:22:25.302 [2024-05-15 02:39:12.575554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.575740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.575768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.576014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.576205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.576232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.576456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.576666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.576693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.576868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.577088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.577116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.577338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.577500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.577525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.302 qpair failed and we were unable to recover it. 00:22:25.302 [2024-05-15 02:39:12.577769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.577980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.302 [2024-05-15 02:39:12.578008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.303 qpair failed and we were unable to recover it. 00:22:25.303 [2024-05-15 02:39:12.578253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.303 [2024-05-15 02:39:12.578542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.303 [2024-05-15 02:39:12.578597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.303 qpair failed and we were unable to recover it. 
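The errno = 111 that repeats throughout the retries above is ECONNREFUSED: the NVMe/TCP initiator keeps calling connect() toward 10.0.0.2 port 4420 while no target is listening there, so every attempt is refused and the qpair cannot be recovered. A minimal standalone C sketch (not SPDK's posix.c implementation) that reproduces the same failure mode against an address with no listener:

    /* Sketch only: a plain blocking TCP connect() to a reachable host with no
     * listener on the port fails with errno 111 (ECONNREFUSED), matching the
     * posix_sock_create messages in the log above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),          /* NVMe/TCP port used in the log */
        };

        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With the target down this prints errno = 111 (Connection refused). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Once a listener reappears on that address and port, the same connect() call succeeds and the retry messages in the log stop.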
00:22:25.303 [2024-05-15 02:39:12.578835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.303 [2024-05-15 02:39:12.579031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.303 [2024-05-15 02:39:12.579058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.303 qpair failed and we were unable to recover it. 00:22:25.303 [2024-05-15 02:39:12.579299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.303 [2024-05-15 02:39:12.579663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.303 [2024-05-15 02:39:12.579718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.303 qpair failed and we were unable to recover it. 00:22:25.303 [2024-05-15 02:39:12.579947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.303 [2024-05-15 02:39:12.580164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.303 [2024-05-15 02:39:12.580192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.303 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.580404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2403635 Killed "${NVMF_APP[@]}" "$@" 00:22:25.304 [2024-05-15 02:39:12.580655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.580683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.580926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:25.304 [2024-05-15 02:39:12.581135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.581160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:25.304 [2024-05-15 02:39:12.581385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:25.304 [2024-05-15 02:39:12.581599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.581627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 
00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.304 [2024-05-15 02:39:12.581870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.582090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.582116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.582314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.582656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.582708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.582920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.583175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.583203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.583444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.583642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.583667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.583901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.584096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.584121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.584306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.584488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.584514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.584704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.584922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.584960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 
00:22:25.304 [2024-05-15 02:39:12.585179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2404546 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2404546 00:22:25.304 [2024-05-15 02:39:12.585425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.585453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2404546 ']' 00:22:25.304 [2024-05-15 02:39:12.585664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:25.304 [2024-05-15 02:39:12.585883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.585912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:25.304 [2024-05-15 02:39:12.586137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 02:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.304 [2024-05-15 02:39:12.586353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.586381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.586589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.586834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.586880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 
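Here nvmfappstart records the new target's PID (nvmfpid=2404546), launches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0xF0, and hands the PID to waitforlisten, which blocks until the app answers on /var/tmp/spdk.sock. A simplified, hedged sketch of such a wait loop (not the exact common.sh implementation, and assuming scripts/rpc.py is on the PATH as rpc.py):

    # Poll the RPC UNIX socket until the freshly started nvmf_tgt responds,
    # giving up early if the process dies first.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1                 # target exited
            [[ -S "$sock" ]] && rpc.py -s "$sock" -t 1 rpc_get_methods \
                &>/dev/null && return 0                            # RPC is up
            sleep 0.5
        done
        return 1
    }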
00:22:25.304 [2024-05-15 02:39:12.587116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.587339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.587368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.587614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.587836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.587881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.588124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.588316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.588344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.588562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.588808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.588854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.589073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.589267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.589296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.589509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.589725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.589758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.589956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.590151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.590176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 
00:22:25.304 [2024-05-15 02:39:12.590372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.590529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.590555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.590753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.591024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.591053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.591246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.591544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.591590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.304 [2024-05-15 02:39:12.591832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.592000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.304 [2024-05-15 02:39:12.592027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.304 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.592216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.592408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.592433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.592615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.592823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.592851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.593066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.593293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.593338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 
00:22:25.305 [2024-05-15 02:39:12.593555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.593744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.593770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.593965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.594183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.594211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.594398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.594603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.594631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.594915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.595112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.595140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.595319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.595529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.595558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.595745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.595942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.595968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.596137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.596352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.596379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 
00:22:25.305 [2024-05-15 02:39:12.596591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.596804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.596833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.597091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.597310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.597353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.597594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.597812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.597840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.598075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.598291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.598319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.598534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.598695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.598719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.598953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.599194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.599222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.599440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.599654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.599681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 
00:22:25.305 [2024-05-15 02:39:12.599886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.600093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.600119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.600338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.600549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.600575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.600764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.600944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.600973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.601211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.601433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.601461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.601673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.601889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.601916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.602169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.602383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.602411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.602600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.602761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.602806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 
00:22:25.305 [2024-05-15 02:39:12.603024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.603205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.603234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.603428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.603622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.603650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.603854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.604037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.604066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.604283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.604510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.604535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.604727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.604944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.604989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.605208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.605422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.605451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.305 qpair failed and we were unable to recover it. 00:22:25.305 [2024-05-15 02:39:12.605685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.605859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.305 [2024-05-15 02:39:12.605885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 
00:22:25.306 [2024-05-15 02:39:12.606116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.606299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.606326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.606518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.606711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.606736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.606928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.607152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.607176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.607402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.607604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.607630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.607836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.608037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.608063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.608270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.608473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.608499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.608698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.608873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.608900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 
00:22:25.306 [2024-05-15 02:39:12.609160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.609373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.609399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.609614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.609814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.609839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.610006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.610220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.610245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.610456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.610705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.610731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.610944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.611108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.611135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.611341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.611531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.611557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.611793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.611962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.611988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 
00:22:25.306 [2024-05-15 02:39:12.612199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.612427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.612458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.612692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.612936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.612962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.613159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.613330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.613355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.613524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.613721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.613746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.613946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.614153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.614178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.614384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.614581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.614607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.614838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.615039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.615066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 
00:22:25.306 [2024-05-15 02:39:12.615258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.615450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.615474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.615665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.615834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.615859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.616050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.616243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.616270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.616461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.616655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.616685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.616852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.617044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.617071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.617263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.617426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.617451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.617621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.617784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.617809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 
00:22:25.306 [2024-05-15 02:39:12.617978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.618145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.618171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.618383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.618548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.618573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.618770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.618942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.618968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.619138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.619334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.306 [2024-05-15 02:39:12.619360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.306 qpair failed and we were unable to recover it. 00:22:25.306 [2024-05-15 02:39:12.619529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.619728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.619753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.619971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.620130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.620157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.620340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.620533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.620559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 
00:22:25.307 [2024-05-15 02:39:12.620756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.620940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.620966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.621151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.621317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.621342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.621530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.621714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.621739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.621921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.622088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.622113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.622268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.622430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.622455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.622670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.622834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.622859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.623054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.623246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.623271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 
00:22:25.307 [2024-05-15 02:39:12.623479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.623676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.623700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.623895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.624101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.624138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.624375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.624615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.624646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.624852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.625045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.625072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.625246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.625411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.625436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.625635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.625794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.625818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.626021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.626210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.626241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 
00:22:25.307 [2024-05-15 02:39:12.626433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.626622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.626648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.626841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.627058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.627084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.627262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.627484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.627511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.627702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.628502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.628533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.628741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.629222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.629252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.631277] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:22:25.307 [2024-05-15 02:39:12.631364] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.307 [2024-05-15 02:39:12.632036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.632229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.632258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.632436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.632617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.632645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 
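The EAL banner above shows how the shell-level options land in DPDK: the application mask -m 0xF0 becomes the EAL core mask -c 0xF0 (reactors on cores 4 through 7), and the shared-memory id -i 0 appears as --file-prefix=spdk0, while the qpair connect retries keep failing until initialization completes. Purely as an illustration of reading the mask (not part of the test):

    # Decode which cores a reactor mask selects; 0xF0 -> cores 4, 5, 6, 7.
    mask=0xF0
    for core in $(seq 0 31); do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done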
00:22:25.307 [2024-05-15 02:39:12.632872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.633098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.633138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.633379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.633545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.633571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.633743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.633911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.633957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.634122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.634310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.634337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.634537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.634733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.634760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.634926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.635134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.307 [2024-05-15 02:39:12.635159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.307 qpair failed and we were unable to recover it. 00:22:25.307 [2024-05-15 02:39:12.635334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.308 [2024-05-15 02:39:12.635548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.308 [2024-05-15 02:39:12.635573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.308 qpair failed and we were unable to recover it. 
00:22:25.310 [2024-05-15 02:39:12.674515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.310 EAL: No free 2048 kB hugepages reported on node 1
00:22:25.310 [2024-05-15 02:39:12.674712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.310 [2024-05-15 02:39:12.674737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:25.310 qpair failed and we were unable to recover it.
00:22:25.584 [2024-05-15 02:39:12.694224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.694418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.694443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.694609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.694836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.694861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.695068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.695236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.695262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.695480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.695679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.695704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.695868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.699136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.699170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.699409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.699605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.699632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.699806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.700002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.700028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 
00:22:25.584 [2024-05-15 02:39:12.700226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.700453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.700479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.700679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.700895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.700921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.701122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.701321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.701345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.701565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.701777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.701802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.702000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.702165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.702189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.584 qpair failed and we were unable to recover it. 00:22:25.584 [2024-05-15 02:39:12.702386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.584 [2024-05-15 02:39:12.702609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.702634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.702855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.703076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.703102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 
00:22:25.585 [2024-05-15 02:39:12.703291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.703518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.703544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.703739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.703942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.703968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.704165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.704371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.704396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.704597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.704812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.704837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.705038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.705232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.705272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.705488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.705686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.705712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.705886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.706097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.706123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 
00:22:25.585 [2024-05-15 02:39:12.706309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.706529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.706554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.706774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.706961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.707001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.707200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.707425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.707449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.707643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.707807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.707833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.708038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.708206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.708253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.708428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.708654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.708678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.708898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.709071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.709097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 
00:22:25.585 [2024-05-15 02:39:12.709266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.709450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.709475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.709668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.709891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.709924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.710127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.710297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.710324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.710547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.710740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.710765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.710966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.711124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.711149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.711371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.711560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.711585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.711780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.711979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.712009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 
00:22:25.585 [2024-05-15 02:39:12.712170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.712362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.712387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.712576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.712734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.712759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.712922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.713098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.713124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.713292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.713518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.713543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.713733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.713951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-05-15 02:39:12.713977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.585 qpair failed and we were unable to recover it. 00:22:25.585 [2024-05-15 02:39:12.714137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.714334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.714358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.714526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.714744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.714769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 
00:22:25.586 [2024-05-15 02:39:12.714926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.715155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.715182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.715412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.715602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.715627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.715795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.715998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.716025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.716251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.716440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.716465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.716651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.716843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.716869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.717047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.717210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.717244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.717409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.717599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.717625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 
00:22:25.586 [2024-05-15 02:39:12.717825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.717993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.718020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.718210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.718203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.586 [2024-05-15 02:39:12.718377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.718403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.718599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.718785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.718810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.718981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.719169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.719194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.719359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.719549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.719573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.719792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.719985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.720011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.720213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.720412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.720437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 
00:22:25.586 [2024-05-15 02:39:12.720652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.720841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.720866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.721069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.721271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.721296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.721492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.721664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.721690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.722020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.722194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.722219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.722401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.722588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.722613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.722802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.722965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.722991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.723155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.723324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.723349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 
00:22:25.586 [2024-05-15 02:39:12.723544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.723731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.723755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.586 [2024-05-15 02:39:12.723911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.724121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.586 [2024-05-15 02:39:12.724147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.586 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.724360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.724577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.724602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.724774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.724993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.725019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.725181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.725377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.725402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.725579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.725803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.725828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.726026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.726221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.726250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 
00:22:25.587 [2024-05-15 02:39:12.726417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.726624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.726649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.726869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.727068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.727094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.727300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.727503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.727529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.727729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.727947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.727973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.728194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.728397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.728423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.728725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.728886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.728910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.729110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.729304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.729329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 
00:22:25.587 [2024-05-15 02:39:12.729527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.729689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.729714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.729902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.730103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.730129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.730324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.730514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.730539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.730715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.730879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.730906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.731110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.731275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.731306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.731480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.731677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.731702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.731892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.732100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.732126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 
00:22:25.587 [2024-05-15 02:39:12.732332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.732517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.732542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.732733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.732924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.732958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.733161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.733364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.733389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.733560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.733760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.733785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.733984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.734143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.734168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.734377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.734579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.734605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 00:22:25.587 [2024-05-15 02:39:12.734804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.734978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.735004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.587 qpair failed and we were unable to recover it. 
00:22:25.587 [2024-05-15 02:39:12.735195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.735364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.587 [2024-05-15 02:39:12.735389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.735587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.735779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.735805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.735978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.736167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.736193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.736387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.736580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.736605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.736790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.736988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.737018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.737212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.737382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.737407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.737565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.737738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.737763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 
00:22:25.588 [2024-05-15 02:39:12.737958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.738128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.738155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.738347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.738519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.738544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.738733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.738923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.738955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.739115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.739277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.739302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.739476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.739667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.739692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.739863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.740040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.740066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.740262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.740451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.740475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 
00:22:25.588 [2024-05-15 02:39:12.740650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.740808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.740834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.741037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.741228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.741253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.741418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.741615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.741642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.741842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.742033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.742058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.742229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.742419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.742445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.742615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.742833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.742858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.743061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.743293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.743319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 
00:22:25.588 [2024-05-15 02:39:12.743517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.743678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.743705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.743924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.744096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.744121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.744320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.744484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.744509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.744727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.744918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.744951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.588 [2024-05-15 02:39:12.745119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.745277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.588 [2024-05-15 02:39:12.745303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.588 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.745479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.745675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.745700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.745899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.746095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.746120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 
00:22:25.589 [2024-05-15 02:39:12.746281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.746474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.746498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.746688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.746879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.746904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.747151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.747353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.747378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.747569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.747754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.747778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.747962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.748128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.748153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.748323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.748517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.748542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.748706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.748897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.748922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 
00:22:25.589 [2024-05-15 02:39:12.749132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.749298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.749324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.749519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.749709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.749734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.749949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.750133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.750157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.750332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.750526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.750551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.750751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.750967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.750993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.751184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.751352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.751376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.751566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.751784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.751809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 
00:22:25.589 [2024-05-15 02:39:12.752026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.752188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.752212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.752380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.752544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.752569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.752758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.752952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.752979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.753175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.753372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.753397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.753583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.753744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.589 [2024-05-15 02:39:12.753768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.589 qpair failed and we were unable to recover it. 00:22:25.589 [2024-05-15 02:39:12.753971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.754165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.754190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.754352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.754545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.754569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 
00:22:25.590 [2024-05-15 02:39:12.754732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.754896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.754923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.755131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.755296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.755320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.755507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.755673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.755698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.755899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.756076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.756102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.756287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.756478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.756503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.756665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.756869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.756893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.757119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.757278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.757307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 
00:22:25.590 [2024-05-15 02:39:12.757505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.757672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.757698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.757922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.758113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.758138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.758333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.758516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.758540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.758730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.758918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.758956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.759118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.759291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.759316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.759500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.759665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.759689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.759857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.760063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.760089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 
00:22:25.590 [2024-05-15 02:39:12.760258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.760450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.760475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.760635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.760823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.760848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.761067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.761273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.761307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.761479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.761700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.761726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.761897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.762060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.762086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.762250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.762440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.762464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.762654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.762878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.762903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 
00:22:25.590 [2024-05-15 02:39:12.763145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.763308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.763333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.763560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.763774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.763799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.764017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.764210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.764235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.764423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.764615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.590 [2024-05-15 02:39:12.764640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.590 qpair failed and we were unable to recover it. 00:22:25.590 [2024-05-15 02:39:12.764828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.765020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.765045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.765200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.765418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.765442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.765651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.765851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.765875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 
00:22:25.591 [2024-05-15 02:39:12.766059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.766227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.766251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.766472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.766661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.766685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.766860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.767025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.767052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.767243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.767463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.767488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.767649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.767871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.767896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.768068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.768259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.768284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.768439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.768628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.768653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 
00:22:25.591 [2024-05-15 02:39:12.768838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.769060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.769086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.769290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.769481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.769508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.769707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.769896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.769921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.770120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.770322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.770348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.770564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.770729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.770754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.770955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.771125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.771151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.771340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.771507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.771533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 
00:22:25.591 [2024-05-15 02:39:12.771736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.771955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.771980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.772155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.772350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.772375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.772566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.772757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.772783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.772980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.773183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.773208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.773413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.773610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.773635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.773802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.773978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.774004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 00:22:25.591 [2024-05-15 02:39:12.774181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.774344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.774369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.591 qpair failed and we were unable to recover it. 
00:22:25.591 [2024-05-15 02:39:12.774564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.591 [2024-05-15 02:39:12.774750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.774774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.774964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.775152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.775178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.775374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.775532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.775557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.775752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.775939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.775965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.776156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.776485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.776509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.776699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.776919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.776952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.777148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.777346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.777370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 
00:22:25.592 [2024-05-15 02:39:12.777573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.777769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.777794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.777972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.778174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.778198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.778359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.778578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.778603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.778824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.779015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.779040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.779258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.779443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.779467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.779671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.779864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.779888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.780054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.780224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.780248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 
00:22:25.592 [2024-05-15 02:39:12.780446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.780660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.780685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.780882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.781078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.781104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.781414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.781606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.781631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.781824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.782058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.782083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.782273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.782467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.782499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.782718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.782888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.782913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.783153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.783346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.783371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 
00:22:25.592 [2024-05-15 02:39:12.783563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.783784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.783809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.783997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.784176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.784202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.784394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.784597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.784625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.784818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.784985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.785011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.785180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.785345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.785371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.785537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.785850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.785875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 00:22:25.592 [2024-05-15 02:39:12.786082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.786250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.786275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.592 qpair failed and we were unable to recover it. 
00:22:25.592 [2024-05-15 02:39:12.786446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.786656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.592 [2024-05-15 02:39:12.786681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.786885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.787049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.787075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.787265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.787450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.787474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.787639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.787938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.787964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.788160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.788347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.788373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.788534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.788700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.788725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.788914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.789114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.789139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 
00:22:25.593 [2024-05-15 02:39:12.789336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.789504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.789528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.789724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.789920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.789952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.790120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.790346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.790371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.790557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.790742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.790767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.790953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.791142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.791166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.791335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.791504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.791532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.791699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.791859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.791883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 
00:22:25.593 [2024-05-15 02:39:12.792083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.792277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.792302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.792520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.792679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.792703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.792904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.793129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.793155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.793350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.793546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.793570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.793765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.793992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.794017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.794190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.794380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.794405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.794598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.794790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.794815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 
00:22:25.593 [2024-05-15 02:39:12.795019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.795185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.795209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.795370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.795567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.795591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.795762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.795984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.796011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.796167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.796386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.796410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.796570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.796742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.796767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.796966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.797158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.797182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 00:22:25.593 [2024-05-15 02:39:12.797378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.797552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.797576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.593 qpair failed and we were unable to recover it. 
00:22:25.593 [2024-05-15 02:39:12.797794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.797987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.593 [2024-05-15 02:39:12.798013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.798203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.798372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.798396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.798614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.798808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.798833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.799052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.799225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.799250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.799412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.799600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.799625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.799828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.800028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.800054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.800248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.800443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.800470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 
00:22:25.594 [2024-05-15 02:39:12.800664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.800857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.800881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.801048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.801241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.801265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.801454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.801643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.801668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.801827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.801996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.802022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.802208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.802368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.802393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.802591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.802779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.802804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.802999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.803196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.803225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 
00:22:25.594 [2024-05-15 02:39:12.803423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.803615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.803640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.803866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.804062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.804088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.804258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.804448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.804474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.804640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.804821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.804845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.805045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.805269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.805293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.805488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.805734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.805759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.805958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.806154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.806179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 
00:22:25.594 [2024-05-15 02:39:12.806400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.806560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.806585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.806741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.806950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.806976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.807142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.807315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.807339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.807513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.807675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.807699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.807868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.808057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.808082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.594 [2024-05-15 02:39:12.808289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.808505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.594 [2024-05-15 02:39:12.808530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.594 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.808721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.808906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.808937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 
00:22:25.595 [2024-05-15 02:39:12.809107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.809327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.809352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.809540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.809732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.809757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.809921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.810125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.810150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.810343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.810537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.810564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.810786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.810978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.811004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.811192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.811381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.811406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.811608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.811800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.811825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 
00:22:25.595 [2024-05-15 02:39:12.812026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.812216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.812241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.812408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.812677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.812702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.812901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.813071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.813096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.813281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.813470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.813495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.813687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.813848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.813874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.814046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.814241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.814266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.814457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.814675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.814700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 
00:22:25.595 [2024-05-15 02:39:12.814918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.815121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.815147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.815336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.815499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.815525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.815747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.815905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.815939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.816162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.816324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.816349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.816569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.816726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.816751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.816913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.817082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.817109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.817299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.817497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.817521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 
00:22:25.595 [2024-05-15 02:39:12.817715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.817907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.817939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.818130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.818318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.818343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.818561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.818746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.818771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.818954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.819125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.819151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.819421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.819686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.819712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.819925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.820100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.820128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 00:22:25.595 [2024-05-15 02:39:12.820289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.820487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.820512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.595 qpair failed and we were unable to recover it. 
00:22:25.595 [2024-05-15 02:39:12.820671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.595 [2024-05-15 02:39:12.820831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.820855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.821048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.821213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.821238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.821390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.821581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.821607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.821797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.821997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.822022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.822194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.822413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.822438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.822604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.822797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.822821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.823005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.823199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.823225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 
00:22:25.596 [2024-05-15 02:39:12.823414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.823599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.823623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.823819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.824013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.824047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.824208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.824396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.824420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.824619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.824805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.824829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.825044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.825240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.825265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.825480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.825665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.825690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.825871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.826036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.826061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 
00:22:25.596 [2024-05-15 02:39:12.826231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.826395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.826419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.826578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.826767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.826791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.827014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.827205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.827230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.827427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.827622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.827648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.827824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.828016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.828042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.828235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.828425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.828449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.828645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.828836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.828860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 
00:22:25.596 [2024-05-15 02:39:12.829030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.829224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.829248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.829409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.829571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.829596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.829791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.829952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.829977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.830142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.830310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.830337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.830535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.830729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.830753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.830915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.831103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.831128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.831322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.831485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.831509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 
00:22:25.596 [2024-05-15 02:39:12.831711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.831899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.831924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.832157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.832389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.596 [2024-05-15 02:39:12.832413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.596 qpair failed and we were unable to recover it. 00:22:25.596 [2024-05-15 02:39:12.832638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.832796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.832820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.832992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.833188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.833212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.833402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.833562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.833586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.833747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.833916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.833949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.834113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.834271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.834295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 
00:22:25.597 [2024-05-15 02:39:12.834453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.834616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.834643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:25.597 qpair failed and we were unable to recover it.
00:22:25.597 [2024-05-15 02:39:12.834809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.834974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.835000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:25.597 qpair failed and we were unable to recover it.
00:22:25.597 [2024-05-15 02:39:12.835018] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:25.597 [2024-05-15 02:39:12.835051] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:25.597 [2024-05-15 02:39:12.835065] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:25.597 [2024-05-15 02:39:12.835077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:25.597 [2024-05-15 02:39:12.835087] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:25.597 [2024-05-15 02:39:12.835195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.835176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:22:25.597 [2024-05-15 02:39:12.835205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:22:25.597 [2024-05-15 02:39:12.835252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:22:25.597 [2024-05-15 02:39:12.835254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:22:25.597 [2024-05-15 02:39:12.835361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.835387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:25.597 qpair failed and we were unable to recover it.
00:22:25.597 [2024-05-15 02:39:12.835559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.835747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.835772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:25.597 qpair failed and we were unable to recover it.
00:22:25.597 [2024-05-15 02:39:12.835951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.836148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.597 [2024-05-15 02:39:12.836173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420
00:22:25.597 qpair failed and we were unable to recover it.
00:22:25.597 [2024-05-15 02:39:12.836366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.836530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.836555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.836719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.836899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.836923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.837123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.837318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.837343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.837506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.837664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.837690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.837874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.838047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.838073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.838238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.838431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.838456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.838627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.838815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.838839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 
00:22:25.597 [2024-05-15 02:39:12.839037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.839193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.839217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.839374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.839556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.839581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.839761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.839934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.839959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.840151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.840328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.840354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.840526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.840734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.840758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.840927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.841102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.841127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.841287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.841495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.841519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 
00:22:25.597 [2024-05-15 02:39:12.841737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.841912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.841945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.842103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.842300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.842324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.842518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.842705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.597 [2024-05-15 02:39:12.842730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.597 qpair failed and we were unable to recover it. 00:22:25.597 [2024-05-15 02:39:12.842941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.598 [2024-05-15 02:39:12.843108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.598 [2024-05-15 02:39:12.843133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.598 qpair failed and we were unable to recover it. 00:22:25.598 [2024-05-15 02:39:12.843296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.598 [2024-05-15 02:39:12.843466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.598 [2024-05-15 02:39:12.843494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.598 qpair failed and we were unable to recover it. 00:22:25.598 [2024-05-15 02:39:12.843667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.598 [2024-05-15 02:39:12.843839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.598 [2024-05-15 02:39:12.843863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.598 qpair failed and we were unable to recover it. 00:22:25.598 [2024-05-15 02:39:12.844052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.598 [2024-05-15 02:39:12.844243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.598 [2024-05-15 02:39:12.844269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.598 qpair failed and we were unable to recover it. 
[... the same three-record failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every connect attempt logged between 02:39:12.844 and 02:39:12.866 ...]
00:22:25.600 [2024-05-15 02:39:12.866120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.600 [2024-05-15 02:39:12.866303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.600 [2024-05-15 02:39:12.866333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420
00:22:25.600 qpair failed and we were unable to recover it.
00:22:25.600 [2024-05-15 02:39:12.866510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.600 [2024-05-15 02:39:12.866684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.600 [2024-05-15 02:39:12.866711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420
00:22:25.600 qpair failed and we were unable to recover it.
00:22:25.600 [2024-05-15 02:39:12.866941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.600 [2024-05-15 02:39:12.867142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.600 [2024-05-15 02:39:12.867168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420
00:22:25.600 qpair failed and we were unable to recover it.
[... from 02:39:12.867 onward the failures resume against tqpair=0x224e420 (addr=10.0.0.2, port=4420) and the identical pattern repeats for every attempt through 02:39:12.903, each ending with "qpair failed and we were unable to recover it." ...]
00:22:25.603 [2024-05-15 02:39:12.903278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.903450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.903474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.903629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.903847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.903872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.904111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.904315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.904340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.904532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.904692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.904717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.904941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.905143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.905168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.905375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.905538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.905562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.905724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.905934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.905962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 
00:22:25.603 [2024-05-15 02:39:12.906135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.906300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.906326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.906512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.906667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.906691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.906849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.907054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.907079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.907283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.907493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.907517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.907683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.907873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.907900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.908150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.908330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.908355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.908520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.908700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.908725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 
00:22:25.603 [2024-05-15 02:39:12.908896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.909066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.909094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.909292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.909474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.909499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.909659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.909858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.909884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.910079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.910246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.910271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.603 [2024-05-15 02:39:12.910424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.910616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.603 [2024-05-15 02:39:12.910641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.603 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.910832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.910988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.911014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.911223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.911395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.911421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 
00:22:25.604 [2024-05-15 02:39:12.911651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.911850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.911876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.912082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.912273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.912299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.912459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.912653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.912677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.912863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.913058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.913084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.913304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.913468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.913493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.913650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.913837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.913863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.914063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.914271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.914296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 
00:22:25.604 [2024-05-15 02:39:12.914468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.914625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.914650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.914859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.915066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.915092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.915255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.915445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.915470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.915635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.915798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.915824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.916003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.916174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.916199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.916363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.916558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.916584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.916748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.916913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.916958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 
00:22:25.604 [2024-05-15 02:39:12.917145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.917308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.917333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.917492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.917654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.917685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.917865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.918057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.918083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.918254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.918407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.918432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.918633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.918823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.918848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.919019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.919191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.919216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.919445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.919607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.919632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 
00:22:25.604 [2024-05-15 02:39:12.919840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.920003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.920033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.920200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.920394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.920419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.920601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.920768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.920792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.920966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.921126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.921151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.921352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.921509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.921534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.921697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.921891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.921916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 00:22:25.604 [2024-05-15 02:39:12.922117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.922302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.604 [2024-05-15 02:39:12.922327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.604 qpair failed and we were unable to recover it. 
00:22:25.605 [2024-05-15 02:39:12.922508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.922698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.922723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.922905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.923085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.923110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.923267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.923433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.923458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.923614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.923790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.923815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.924011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.924170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.924195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.924381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.924535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.924560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.924749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.924905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.924938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 
00:22:25.605 [2024-05-15 02:39:12.925136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.925304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.925329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.925536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.925698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.925723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.925898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.926066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.926092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.926281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.926455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.926480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.926673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.926844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.926868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.927060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.927232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.927257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.927414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.927598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.927624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 
00:22:25.605 [2024-05-15 02:39:12.927819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.927984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.928010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.928179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.928342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.928367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.928558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.928751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.928776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.928974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.929138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.929163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.929345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.929561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.929586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.929741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.929897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.929924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.930108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.930284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.930308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 
00:22:25.605 [2024-05-15 02:39:12.930474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.930647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.930672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.930832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.931033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.931058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.931220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.931382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.931406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.931590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.931787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.931812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.931985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.932180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.932206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.932400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.932565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.932590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.932746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.932959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.932984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 
00:22:25.605 [2024-05-15 02:39:12.933143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.933322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.933347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.605 qpair failed and we were unable to recover it. 00:22:25.605 [2024-05-15 02:39:12.933542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.933722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.605 [2024-05-15 02:39:12.933749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.933944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.934110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.934135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.934325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.934504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.934529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.934691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.934866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.934892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.935095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.935283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.935307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.935489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.935684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.935709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 
00:22:25.606 [2024-05-15 02:39:12.935902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.936089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.936115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.936294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.936448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.936472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.936741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.936954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.936983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.937162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.937326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.937352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.937513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.937702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.937728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.937902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.938124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.938150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.938316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.938486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.938511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 
00:22:25.606 [2024-05-15 02:39:12.938668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.938853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.938878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.939096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.939265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.939290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.939480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.939645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.939674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.939892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.940087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.940113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.940271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.940495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.940519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.940790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.940979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.941005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.941200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.941406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.941430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 
00:22:25.606 [2024-05-15 02:39:12.941616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.941796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.941820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.942025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.942202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.942227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.942420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.942611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.942636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.942800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.942961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.942987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.943177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.943364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.943389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.943567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.943752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.943776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.944053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.944245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.944270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 
00:22:25.606 [2024-05-15 02:39:12.944456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.944643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.944668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.944854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.945022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.945049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.945242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.945444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.945471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.606 qpair failed and we were unable to recover it. 00:22:25.606 [2024-05-15 02:39:12.945637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.606 [2024-05-15 02:39:12.945797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.945825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.945988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.946175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.946200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.946389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.946580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.946604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.946778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.946968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.946993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 
00:22:25.607 [2024-05-15 02:39:12.947173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.947360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.947385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.947600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.947755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.947780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.947960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.948161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.948187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.948382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.948650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.948675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.948835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.949035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.949061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.949221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.949411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.949438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.949631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.949900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.949941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 
00:22:25.607 [2024-05-15 02:39:12.950126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.950291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.950316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.950526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.950716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.950740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.950957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.951114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.951139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.951408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.951622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.951647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.951805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.951982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.952009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.952184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.952382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.952407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.952569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.952834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.952859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 
00:22:25.607 [2024-05-15 02:39:12.953049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.953235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.953270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.953443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.953603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.953628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.953818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.954000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.954026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.954194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.954349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.954374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.954532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.954716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.954740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.954925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.955092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.955119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.607 [2024-05-15 02:39:12.955280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.955443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.955468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 
00:22:25.607 [2024-05-15 02:39:12.955621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.955808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.607 [2024-05-15 02:39:12.955834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.607 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.956078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.956250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.956276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.956493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.956662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.956687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.956956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.957111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.957137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.957296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.957458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.957485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.957680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.957890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.957916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.958090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.958270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.958295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 
00:22:25.608 [2024-05-15 02:39:12.958462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.958627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.958652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.958843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.959016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.959042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.959200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.959388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.959413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.959570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.959769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.959794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.959957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.960126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.960155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.960332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.960489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.960516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.960699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.960854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.960878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 
00:22:25.608 [2024-05-15 02:39:12.961083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.961263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.961288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.961476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.961631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.961656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.961823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.962012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.962039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.962224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.962393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.962418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.962573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.962791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.962816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.962999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.963159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.963184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.963382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.963586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.963611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 
00:22:25.608 [2024-05-15 02:39:12.963804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.963966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.963996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.964160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.964386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.964413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.964611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.964800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.964824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.965003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.965173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.965198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.965378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.965567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.965593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.965801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.965996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.966022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.966190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.966399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.966425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 
00:22:25.608 [2024-05-15 02:39:12.966615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.966777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.966801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.966988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.967181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.967206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.608 [2024-05-15 02:39:12.967374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.967557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.608 [2024-05-15 02:39:12.967583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.608 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.967748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.967914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.967946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.968118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.968298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.968322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.968476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.968650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.968675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.968865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.969051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.969076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 
00:22:25.609 [2024-05-15 02:39:12.969251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.969407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.969432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.969598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.969803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.969828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.969989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.970171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.970195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.970411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.970604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.970631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.970791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.970990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.971017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.971181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.971391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.971417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.971578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.971789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.971815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 
00:22:25.609 [2024-05-15 02:39:12.971990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.972155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.972181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.972378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.972569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.972594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.972786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.972978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.973004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.973177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.973339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.973365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.973571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.973757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.973782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.973965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.974192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.974218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.974412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.974596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.974621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 
00:22:25.609 [2024-05-15 02:39:12.974804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.974968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.974994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.975266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.975455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.975480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.975647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.975854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.975878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.976056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.976222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.976248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.976405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.976563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.976588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.976750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.976908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.976949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.977144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.977336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.977362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 
00:22:25.609 [2024-05-15 02:39:12.977538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.977705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.977731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.977948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.978111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.978136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.978297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.978481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.978506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.978717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.978880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.609 [2024-05-15 02:39:12.978905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.609 qpair failed and we were unable to recover it. 00:22:25.609 [2024-05-15 02:39:12.979079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.979241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.979267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.979423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.979611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.979637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.979798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.980028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.980055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 
00:22:25.610 [2024-05-15 02:39:12.980325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.980511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.980536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.980702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.980877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.980903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.981107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.981269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.981294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.981490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.981651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.981676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.981853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.982030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.982058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.982224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.982378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.982402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.982586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.982791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.982818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 
00:22:25.610 [2024-05-15 02:39:12.983016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.983230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.983254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.983418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.983579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.983612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.983802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.983984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.610 [2024-05-15 02:39:12.984016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.610 qpair failed and we were unable to recover it. 00:22:25.610 [2024-05-15 02:39:12.984211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.884 [2024-05-15 02:39:12.984382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.884 [2024-05-15 02:39:12.984407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.884 qpair failed and we were unable to recover it. 00:22:25.884 [2024-05-15 02:39:12.984579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.884 [2024-05-15 02:39:12.984768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.884 [2024-05-15 02:39:12.984808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.884 qpair failed and we were unable to recover it. 00:22:25.884 [2024-05-15 02:39:12.984986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.884 [2024-05-15 02:39:12.985177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.884 [2024-05-15 02:39:12.985203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.884 qpair failed and we were unable to recover it. 00:22:25.884 [2024-05-15 02:39:12.985371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.884 [2024-05-15 02:39:12.985582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.985608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 
00:22:25.885 [2024-05-15 02:39:12.985797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.985978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.986005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.986200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.986397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.986423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.986631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.986825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.986850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.987026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.987237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.987264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.987438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.987609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.987637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.987835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.988024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.988050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.988227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.988381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.988406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 
00:22:25.885 [2024-05-15 02:39:12.988578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.988767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.988792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.988951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.989121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.989146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.989352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.989542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.989567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.989760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.989960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.989986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.990198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.990410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.990435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.990605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.990795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.990819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.991051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.991213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.991244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 
00:22:25.885 [2024-05-15 02:39:12.991462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.991654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.991678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.991875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.992072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.992098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.992258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.992455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.992480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.992689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.992841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.992866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.993080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.993237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.993262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.993454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.993615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.993640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.993828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.994022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.994050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 
00:22:25.885 [2024-05-15 02:39:12.994219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.994371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.994396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.994554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.994747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.994772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.994963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.995156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.995181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.995340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.995552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.995576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.995758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.995996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.996023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.996197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.996375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.996399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.996564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.996742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.996767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 
00:22:25.885 [2024-05-15 02:39:12.996940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.997123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.885 [2024-05-15 02:39:12.997147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.885 qpair failed and we were unable to recover it. 00:22:25.885 [2024-05-15 02:39:12.997347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.997532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.997557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:12.997748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.997939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.997965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:12.998143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.998348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.998373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:12.998566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.998728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.998752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:12.998953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.999138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.999163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:12.999358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.999521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.999548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 
00:22:25.886 [2024-05-15 02:39:12.999736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.999901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:12.999926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.000106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.000270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.000297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.000468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.000638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.000665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.000826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.001018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.001044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.001227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.001395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.001420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.001588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.001775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.001800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.001964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.002127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.002153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 
00:22:25.886 [2024-05-15 02:39:13.002321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.002485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.002509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.002680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.002844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.002868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.003060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.003270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.003295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.003485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.003665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.003690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.003886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.004074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.004104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.004269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.004427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.004452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.004635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.004797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.004822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 
00:22:25.886 [2024-05-15 02:39:13.005022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.005187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.005214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.005378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.005562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.005587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.005775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.005938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.005966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.006136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.006315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.006340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.006525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.006708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.006733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.006916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.007084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.007109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.007265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.007446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.007471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 
00:22:25.886 [2024-05-15 02:39:13.007666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.007850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.007875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.008072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.008236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.008262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.886 [2024-05-15 02:39:13.008425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.008615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.886 [2024-05-15 02:39:13.008640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.886 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.008799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.008989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.009015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.009187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.009382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.009408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.009566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.009754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.009780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.009957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.010176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.010202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 
00:22:25.887 [2024-05-15 02:39:13.010381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.010576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.010601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.010760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.010964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.010990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.011157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.011350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.011375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.011568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.011746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.011772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.011970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.012131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.012156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.012322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.012514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.012539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.012699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.012907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.012951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 
00:22:25.887 [2024-05-15 02:39:13.013122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.013289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.013314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.013482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.013687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.013712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.013902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.014070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.014099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.014264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.014431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.014459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.014634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.014821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.014846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.015019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.015208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.015233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.015422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.015611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.015636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 
00:22:25.887 [2024-05-15 02:39:13.015824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.016028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.016053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.016246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.016462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.016500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.016706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.016900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.016926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.017104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.017262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.017286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.017447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.017635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.017660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.017851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.018028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.018054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.018248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.018441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.018466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 
00:22:25.887 [2024-05-15 02:39:13.018653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.018832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.018858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.019036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.019206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.019233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.019404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.019562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.019588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.019755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.019922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.019953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.887 [2024-05-15 02:39:13.020121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.020283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.887 [2024-05-15 02:39:13.020308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.887 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.020505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.020673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.020707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.020903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.021069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.021094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 
00:22:25.888 [2024-05-15 02:39:13.021252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.021436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.021461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.021622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.021826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.021851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.022064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.022255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.022280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.022470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.022638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.022664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.022865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.023028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.023055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.023207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.023395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.023421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.023580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.023761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.023792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 
00:22:25.888 [2024-05-15 02:39:13.023986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.024152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.024177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.024379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.024544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.024569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.024782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.024976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.025002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.025175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.025355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.025381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.025546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.025740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.025765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.025938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.026131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.026157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.026341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.026537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.026563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 
00:22:25.888 [2024-05-15 02:39:13.026761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.026953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.026980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.027176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.027339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.027365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.027537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.027734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.027764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.027969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.028165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.028190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.028365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.028529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.028554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.028747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.028941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.028967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.029128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.029341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.029366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 
00:22:25.888 [2024-05-15 02:39:13.029563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.029746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.029771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.029938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.030125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.030151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.030358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.030542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.030570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.030767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.030958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.030984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.031160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.031363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.031388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.031579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.031769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.031794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.888 qpair failed and we were unable to recover it. 00:22:25.888 [2024-05-15 02:39:13.032027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.888 [2024-05-15 02:39:13.032191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.032216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 
00:22:25.889 [2024-05-15 02:39:13.032414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.032572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.032598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.032759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.032989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.033015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.033203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.033425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.033450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.033621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.033806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.033831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.034002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.034165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.034190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.034367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.034558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.034593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.034798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.034963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.034990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 
00:22:25.889 [2024-05-15 02:39:13.035185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.035356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.035381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.035570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.035725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.035750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.035971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.036159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.036184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.036376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.036541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.036566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.036732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.036919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.036963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.037118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.037306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.037332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.037521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.037691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.037719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 
00:22:25.889 [2024-05-15 02:39:13.037890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.038062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.038087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.038249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.038446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.038471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.038642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.038828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.038854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.039048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.039209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.039246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.039415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.039572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.039597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.039756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.039917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.039949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.040140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.040366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.040391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 
00:22:25.889 [2024-05-15 02:39:13.040574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.040766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.040791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.040974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.041135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.041161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.889 qpair failed and we were unable to recover it. 00:22:25.889 [2024-05-15 02:39:13.041336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.889 [2024-05-15 02:39:13.041523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.041548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.041727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.041918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.041958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.042116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.042306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.042337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.042500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.042719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.042744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.042900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.043076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.043103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 
00:22:25.890 [2024-05-15 02:39:13.043292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.043479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.043507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.043682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.043848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.043874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.044056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.044227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.044262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.044432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.044625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.044658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.044832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.045010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.045036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.045208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.045369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.045394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.045592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.045757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.045783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 
00:22:25.890 [2024-05-15 02:39:13.045980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.046170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.046195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.046369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.046521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.046546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.046735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.046941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.046967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.047128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.047288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.047313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.047472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.047663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.047693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.047864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.048131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.048158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.048362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.048533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.048558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 
00:22:25.890 [2024-05-15 02:39:13.048747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.048943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.048971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.049134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.049290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.049315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.049517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.049715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.049740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.049914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.050119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.050145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.050302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.050470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.050495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.050687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.050876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.050901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.051116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.051297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.051327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 
00:22:25.890 [2024-05-15 02:39:13.051506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.051699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.051725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.051921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.052106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.052134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.052328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.052491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.052516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.052706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.052864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.052890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.890 qpair failed and we were unable to recover it. 00:22:25.890 [2024-05-15 02:39:13.053080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.890 [2024-05-15 02:39:13.053246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.053271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.053473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.053663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.053689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.053887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.054088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.054114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 
00:22:25.891 [2024-05-15 02:39:13.054306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.054501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.054526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.054750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.054948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.054974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.055165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.055357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.055384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.055577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.055741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.055767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.055944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.056138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.056164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.056331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.056495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.056520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.056678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.056876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.056901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 
00:22:25.891 [2024-05-15 02:39:13.057069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.057243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.057269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.057459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.057655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.057680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.057873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.058037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.058062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.058228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.058419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.058444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.058611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.058772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.058798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.058983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.059170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.059195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.059366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.059536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.059562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 
00:22:25.891 [2024-05-15 02:39:13.059725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.059897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.059923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.060088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.060307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.060334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.060495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.060654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.060679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.060858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.061016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.061043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.061242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.061414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.061438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.061593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.061787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.061813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.061998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.062186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.062220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 
00:22:25.891 [2024-05-15 02:39:13.062385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.062554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.062581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.062769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.062967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.062994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.063189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.063360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.063386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.063617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.063774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.063802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.063970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.064140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.064166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.891 qpair failed and we were unable to recover it. 00:22:25.891 [2024-05-15 02:39:13.064324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.064493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.891 [2024-05-15 02:39:13.064518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.064698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.064879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.064905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 
00:22:25.892 [2024-05-15 02:39:13.065106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.065299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.065324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.065509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.065702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.065730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.065935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.066125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.066151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.066327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.066494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.066520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.066743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.066942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.066970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.067133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.067292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.067323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.067518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.067731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.067757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 
00:22:25.892 [2024-05-15 02:39:13.067971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.068160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.068185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.068353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.068549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.068576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.068737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.068948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.068975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.069142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.069340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.069368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.069532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.069701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.069726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.069917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.070099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.070125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.070340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.070529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.070556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 
00:22:25.892 [2024-05-15 02:39:13.070753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.070948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.070974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.071137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.071305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.071332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.071505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.071667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.071693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.071859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.072050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.072077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.072238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.072431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.072457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.072614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.072803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.072828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.073040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.073214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.073240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 
00:22:25.892 [2024-05-15 02:39:13.073414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.073577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.073605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.073771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.073946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.073972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.074133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.074299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.074324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.074511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.074699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.074724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.074944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.075131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.075159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.075361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.075557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.075593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 00:22:25.892 [2024-05-15 02:39:13.075752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.075905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.075941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.892 qpair failed and we were unable to recover it. 
00:22:25.892 [2024-05-15 02:39:13.076108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.892 [2024-05-15 02:39:13.076278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.076306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.076506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.076693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.076719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.076905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.077087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.077114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.077308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.077501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.077526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.077707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.077868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.077894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.078098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.078296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.078321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.078504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.078683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.078708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 
00:22:25.893 [2024-05-15 02:39:13.078896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.079100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.079126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.079292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.079451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.079477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.079680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.079835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.079860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.080056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.080222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.080248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.080440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.080600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.080626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.080823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.081093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.081120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.081287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.081444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.081470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 
00:22:25.893 [2024-05-15 02:39:13.081666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.081884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.081910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.082072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.082269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.082294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.082457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.082639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.082664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.082826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.083012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.083039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.083205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.083376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.083403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.083564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.083750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.083775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.083964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.084123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.084149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 
00:22:25.893 [2024-05-15 02:39:13.084301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.084494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.084521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.084706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.084917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.084947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.085111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.085330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.085356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.085541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.085725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.085750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.893 qpair failed and we were unable to recover it. 00:22:25.893 [2024-05-15 02:39:13.085951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.893 [2024-05-15 02:39:13.086173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.086198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 00:22:25.894 [2024-05-15 02:39:13.086384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.086588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.086613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 00:22:25.894 [2024-05-15 02:39:13.086783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.087048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.087075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 
00:22:25.894 [2024-05-15 02:39:13.087263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.087444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.087469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 00:22:25.894 [2024-05-15 02:39:13.087690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.087853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.087878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 00:22:25.894 [2024-05-15 02:39:13.088047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.088199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.088224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 00:22:25.894 [2024-05-15 02:39:13.088412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.088597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.088625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 00:22:25.894 [2024-05-15 02:39:13.088789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.088979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.089005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 00:22:25.894 [2024-05-15 02:39:13.089165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.089325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.089351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 00:22:25.894 [2024-05-15 02:39:13.089511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.089689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.089714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it. 
00:22:25.894 [2024-05-15 02:39:13.089935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.090101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.894 [2024-05-15 02:39:13.090126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.894 qpair failed and we were unable to recover it.
[identical three-message failure pattern -- posix.c:1037:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- repeats continuously from 02:39:13.090 through 02:39:13.149 (log time 00:22:25.894 to 00:22:25.899); duplicate entries elided]
00:22:25.899 [2024-05-15 02:39:13.149706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.149898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.149923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.899 qpair failed and we were unable to recover it. 00:22:25.899 [2024-05-15 02:39:13.150106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.150267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.150293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.899 qpair failed and we were unable to recover it. 00:22:25.899 [2024-05-15 02:39:13.150449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.150607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.150632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.899 qpair failed and we were unable to recover it. 00:22:25.899 [2024-05-15 02:39:13.150817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.150986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.151012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.899 qpair failed and we were unable to recover it. 00:22:25.899 [2024-05-15 02:39:13.151174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.151337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.151364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.899 qpair failed and we were unable to recover it. 00:22:25.899 [2024-05-15 02:39:13.151537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.151721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.151747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.899 qpair failed and we were unable to recover it. 00:22:25.899 [2024-05-15 02:39:13.151948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.152125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.152150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.899 qpair failed and we were unable to recover it. 
00:22:25.899 [2024-05-15 02:39:13.152351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.152546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.152571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.899 qpair failed and we were unable to recover it. 00:22:25.899 [2024-05-15 02:39:13.152735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.152905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.152949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.899 qpair failed and we were unable to recover it. 00:22:25.899 [2024-05-15 02:39:13.153126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.899 [2024-05-15 02:39:13.153306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.153331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.153542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.153742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.153771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.153958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.154119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.154144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.154322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.154492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.154516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.154676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.154847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.154872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 
00:22:25.900 [2024-05-15 02:39:13.155046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.155214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.155240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.155427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.155613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.155639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.155818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.155981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.156007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.156162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.156358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.156382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.156552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.156713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.156739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.156900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.157102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.157129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.157300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.157465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.157494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 
00:22:25.900 [2024-05-15 02:39:13.157663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.157859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.157885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.158055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.158241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.158266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.158430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.158601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.158625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.158791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.158980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.159007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.159197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.159355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.159380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.159543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.159706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.159731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.159890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.160062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.160088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 
00:22:25.900 [2024-05-15 02:39:13.160253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.160450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.160476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.160645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.160796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.160821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.160994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.161172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.161202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.161394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.161573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.161598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.161763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.161941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.161970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.162141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.162298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.162323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.162484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.162638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.162663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 
00:22:25.900 [2024-05-15 02:39:13.162858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.163045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.163072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.163235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.163393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.163420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.163615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.163808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.163833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.163998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.164190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.900 [2024-05-15 02:39:13.164216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.900 qpair failed and we were unable to recover it. 00:22:25.900 [2024-05-15 02:39:13.164397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.164556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.164583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.164754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.164940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.164972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.165160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.165374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.165398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 
00:22:25.901 [2024-05-15 02:39:13.165561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.165716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.165741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.166001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.166188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.166213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.166409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.166571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.166597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.166757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.166920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.166952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.167137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.167301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.167327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.167512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.167701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.167726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.167884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.168043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.168069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 
00:22:25.901 [2024-05-15 02:39:13.168265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.168458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.168485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.168648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.168811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.168837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.169025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.169213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.169238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.169398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.169578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.169604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.169766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.169922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.169953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.170121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.170282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.170307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.170486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.170695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.170721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 
00:22:25.901 [2024-05-15 02:39:13.170919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.171085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.171110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.171272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.171443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.171468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.171632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.171792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.171817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.172001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.172162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.172187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.172348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.172540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.172565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.172727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.172909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.172942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.173109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.173274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.173299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 
00:22:25.901 [2024-05-15 02:39:13.173466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.173627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.173654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.173815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.173982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.174010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.174172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.174354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.174379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.174573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.174765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.174792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.174971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.175152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.175177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.901 [2024-05-15 02:39:13.175365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.175567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.901 [2024-05-15 02:39:13.175592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.901 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.175789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.175971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.175998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 
00:22:25.902 [2024-05-15 02:39:13.176157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.176354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.176381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.176548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.176719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.176745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.176909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.177084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.177111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.177279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.177462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.177488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.177690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.177849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.177874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.178061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.178225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.178250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.178413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.178571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.178597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 
00:22:25.902 [2024-05-15 02:39:13.178783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.178966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.179001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.179187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.179352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.179380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.179546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.179742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.179767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.179949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.180105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.180130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.180312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.180479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.180503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.180664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.180875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.180900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.181093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.181289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.181314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 
00:22:25.902 [2024-05-15 02:39:13.181470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.181661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.181686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.181854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.182026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.182052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.182273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.182441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.182466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.182661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.182847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.182872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.183030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.183218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.183243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.183406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.183618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.183644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.183809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.183966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.183992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 
00:22:25.902 [2024-05-15 02:39:13.184164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.184328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.184353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.184506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.184726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.184752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.184944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.185108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.185135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.902 qpair failed and we were unable to recover it. 00:22:25.902 [2024-05-15 02:39:13.185359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.902 [2024-05-15 02:39:13.185545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.185570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.185755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.185951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.185978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.186139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.186309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.186334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.186499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.186695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.186719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 
00:22:25.903 [2024-05-15 02:39:13.186890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.187061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.187087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.187244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.187402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.187427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.187613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.187781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.187807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.188003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.188192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.188217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.188374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.188539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.188564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.188727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.188918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.188950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.189138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.189325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.189350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 
00:22:25.903 [2024-05-15 02:39:13.189552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.189718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.189743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.189900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.190118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.190143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.190312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.190478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.190504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.190690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.190842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.190867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.191031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.191248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.191274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.191444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.191660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.191686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.191853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.192015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.192041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 
00:22:25.903 [2024-05-15 02:39:13.192232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.192401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.192426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.192604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.192771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.192796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.192961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.193159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.193186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.193381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.193573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.193598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.193769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.193962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.193988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.194144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.194318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.194343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.194502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.194693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.194718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 
00:22:25.903 [2024-05-15 02:39:13.194900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.195114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.195140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.195328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.195523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.195549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.195720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.195941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.195967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.196125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.196319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.196344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.903 qpair failed and we were unable to recover it. 00:22:25.903 [2024-05-15 02:39:13.196533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.196719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.903 [2024-05-15 02:39:13.196745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.196959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.197127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.197154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.197350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.197504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.197529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 
00:22:25.904 [2024-05-15 02:39:13.197690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.197860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.197887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.198061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.198255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.198280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.198452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.198607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.198633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.198821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.198984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.199012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.199179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.199368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.199394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.199553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.199739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.199765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.199937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.200104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.200129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 
00:22:25.904 [2024-05-15 02:39:13.200297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.200463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.200488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.200707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.200872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.200897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.201103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.201268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.201294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.201447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.201613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.201637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.201833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.202026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.202053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.202251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.202409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.202435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.202596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.202768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.202795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 
00:22:25.904 [2024-05-15 02:39:13.202977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.203146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.203173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.203350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.203539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.203565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.203743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.203922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.203953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.204141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.204319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.204344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.204533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.204688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.204713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.204911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.205081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.205107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.205278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.205457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.205482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 
00:22:25.904 [2024-05-15 02:39:13.205695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.205875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.205900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.206098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.206262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.206288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.206451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.206604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.206629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.206823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.207020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.207046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.207204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.207373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.207399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.207582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.207741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.207767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.904 qpair failed and we were unable to recover it. 00:22:25.904 [2024-05-15 02:39:13.207962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.904 [2024-05-15 02:39:13.208151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.208176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 
00:22:25.905 [2024-05-15 02:39:13.208369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.208558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.208583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.208755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.208955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.208983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.209176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.209349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.209375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.209568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.209754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.209779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.209943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.210138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.210164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.210327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.210515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.210541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.210709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.210874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.210900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 
00:22:25.905 [2024-05-15 02:39:13.211069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.211274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.211299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.211454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.211619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.211646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.211849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.212021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.212047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.212219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.212408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.212433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.212624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.212812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.212837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.213026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.213224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.213250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.213408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.213622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.213647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 
00:22:25.905 [2024-05-15 02:39:13.213839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.213996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.214022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.214184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.214373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.214399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.214555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.214742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.214767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.214947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.215142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.215171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.215362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.215553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.215579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.215800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.215959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.215986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.216157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.216328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.216353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 
00:22:25.905 [2024-05-15 02:39:13.216543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.216738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.216763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.216988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.217153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.217179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.217356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.217541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.217567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.217735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.217919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.217950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.218140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.218307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.218332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.218522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.218682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.218707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.905 [2024-05-15 02:39:13.218873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.219041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.219071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 
00:22:25.905 [2024-05-15 02:39:13.219267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.219456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.905 [2024-05-15 02:39:13.219481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.905 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.219700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.219886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.219912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.220117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.220276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.220301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.220466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.220659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.220683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.220858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.221060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.221088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.221278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.221439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.221465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.221629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.221844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.221870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 
00:22:25.906 [2024-05-15 02:39:13.222060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.222224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.222248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.222434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.222599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.222625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.222785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.222977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.223009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.223177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.223345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.223371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.223542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.223733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.223758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.223918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.224084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.224108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.224289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.224473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.224499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 
00:22:25.906 [2024-05-15 02:39:13.224665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.224855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.224880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.225072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.225239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.225264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.225456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.225669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.225694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.225879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.226050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.226077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.226241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.226439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.226465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.226629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.226797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.226826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.227018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.227219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.227244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 
00:22:25.906 [2024-05-15 02:39:13.227407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.227561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.227586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.227750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.227940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.227965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.228152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.228318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.228343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.228513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.228701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.228725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.228914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.229114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.229140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.229342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.229553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.229579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.229742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.229894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.229919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 
00:22:25.906 [2024-05-15 02:39:13.230094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.230249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.230274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.230430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.230629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.230655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.906 qpair failed and we were unable to recover it. 00:22:25.906 [2024-05-15 02:39:13.230855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.231054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.906 [2024-05-15 02:39:13.231080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.231254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.231411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.231435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.231596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.231759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.231784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.231947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.232125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.232150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.232342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.232532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.232557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 
00:22:25.907 [2024-05-15 02:39:13.232722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.232910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.232947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.233133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.233331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.233356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.233517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.233705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.233731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.233884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.234036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.234062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.234263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.234428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.234453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.234647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.234816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.234841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.235031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.235195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.235220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 
00:22:25.907 [2024-05-15 02:39:13.235399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.235588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.235612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.235766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.235936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.235961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.236127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.236312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.236337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.236495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.236682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.236707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.236898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.237065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.237091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.237272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.237427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.237452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.237616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.237837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.237862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 
00:22:25.907 [2024-05-15 02:39:13.238031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.238225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.238249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.238417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.238606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.238632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.238790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.238984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.239010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.239191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.239411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.239437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.239608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.239799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.239825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.239996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.240160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.240186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 00:22:25.907 [2024-05-15 02:39:13.240338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.240502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.240528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.907 qpair failed and we were unable to recover it. 
00:22:25.907 [2024-05-15 02:39:13.240682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.907 [2024-05-15 02:39:13.240878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.240904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.241102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.241285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.241311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.241523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.241705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.241731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.241899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.242073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.242098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.242260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.242417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.242443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.242614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.242789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.242813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.243023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.243210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.243235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 
00:22:25.908 [2024-05-15 02:39:13.243412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.243636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.243661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.243847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.244034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.244060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.244220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.244382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.244408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.244564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.244721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.244746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.244954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.245142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.245167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.245383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.245569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.245593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.245751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.245905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.245938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 
00:22:25.908 [2024-05-15 02:39:13.246137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.246299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.246325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.246503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.246692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.246717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.246879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.247060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.247086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.247253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.247443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.247469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.247634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.247800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.247825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.247996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.248162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.248187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.248377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.248552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.248578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 
00:22:25.908 [2024-05-15 02:39:13.248762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.248949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.248975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.249131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.249289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.249315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.249471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.249637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.249664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.249827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.250017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.250044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.250201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.250384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.250409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.250598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.250756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.250781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.250942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.251111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.251137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 
00:22:25.908 [2024-05-15 02:39:13.251315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.251474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.251499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.908 [2024-05-15 02:39:13.251690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.251877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.908 [2024-05-15 02:39:13.251902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.908 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.252091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.252251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.252276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.252471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.252642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.252668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.252889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.253114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.253139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.253302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.253470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.253496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.253663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.253829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.253854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 
00:22:25.909 [2024-05-15 02:39:13.254023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.254192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.254217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.254385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.254545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.254572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.254784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.254954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.254982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.255175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.255334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.255359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.255556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.255716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.255742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.255939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.256101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.256128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.256291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.256486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.256513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 
00:22:25.909 [2024-05-15 02:39:13.256705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.256864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.256889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.257064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.257233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.257258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.257476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.257666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.257692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.257877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.258044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.258069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.258256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.258413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.258438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.258653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.258841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.258867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.259029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.259198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.259222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 
00:22:25.909 [2024-05-15 02:39:13.259411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.259591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.259616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.259773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.259938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.259964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.260134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.260329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.260355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.260532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.260697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.260724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.260886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.261080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.261106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.261288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.261518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.261543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.261708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.261896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.261920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 
00:22:25.909 [2024-05-15 02:39:13.262092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.262278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.262303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.262475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.262658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.262682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.262870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.263032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.263059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.909 qpair failed and we were unable to recover it. 00:22:25.909 [2024-05-15 02:39:13.263247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.909 [2024-05-15 02:39:13.263437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.263463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.263621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.263776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.263801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.263993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.264156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.264181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.264392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.264583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.264610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 
00:22:25.910 [2024-05-15 02:39:13.264776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.264970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.264996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.265151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.265338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.265363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.265547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.265741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.265766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.265950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.266105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.266130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.266302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.266459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.266483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.266683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.266847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.266873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.267032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.267185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.267210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 
00:22:25.910 [2024-05-15 02:39:13.267369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.267538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.267566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.267753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.267945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.267971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.268142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.268325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.268350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.268507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.268660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.268685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.268854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.269045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.269071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.269229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.269388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.269415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.269605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.269773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.269797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 
00:22:25.910 [2024-05-15 02:39:13.270010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.270198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.270223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.270411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.270581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.270605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.270766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.270927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.270961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.271128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.271286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.271312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.271507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.271697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.271723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.271883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.272040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.272066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.272234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.272400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.272425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 
00:22:25.910 [2024-05-15 02:39:13.272620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.272796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.272822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.272984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.273144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.273171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.273354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.273565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.273590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.273757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.273940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.273966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.274157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.274371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.910 [2024-05-15 02:39:13.274396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.910 qpair failed and we were unable to recover it. 00:22:25.910 [2024-05-15 02:39:13.274588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.274746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.274771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.274938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.275100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.275125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 
00:22:25.911 [2024-05-15 02:39:13.275305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.275463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.275489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.275676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.275839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.275864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.276028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.276190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.276217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.276371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.276563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.276592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.276791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.276968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.276995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.277190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.277352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.277377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.277542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.277728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.277754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 
00:22:25.911 [2024-05-15 02:39:13.277920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.278115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.278141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.278299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.278485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.278511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.278675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.278835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.278859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.279042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.279234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.279258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.279423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.279609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.279634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.279791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.279954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.279981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.280175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.280331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.280363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 
00:22:25.911 [2024-05-15 02:39:13.280558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.280747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.280772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.280940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.281144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.281169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.281350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.281532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.281557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.281747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.281957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.281984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.282144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.282300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.911 [2024-05-15 02:39:13.282325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:25.911 qpair failed and we were unable to recover it. 00:22:25.911 [2024-05-15 02:39:13.282498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.197 [2024-05-15 02:39:13.282660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.197 [2024-05-15 02:39:13.282686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.197 qpair failed and we were unable to recover it. 00:22:26.197 [2024-05-15 02:39:13.282901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.197 [2024-05-15 02:39:13.283093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.197 [2024-05-15 02:39:13.283118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.197 qpair failed and we were unable to recover it. 
00:22:26.197 [2024-05-15 02:39:13.283276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.197 [2024-05-15 02:39:13.283437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.197 [2024-05-15 02:39:13.283463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.283653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.283845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.283870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.284063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.284258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.284288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.284478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.284646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.284672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.284832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.284998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.285024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.285191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.285350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.285376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.285539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.285721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.285746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 
00:22:26.198 [2024-05-15 02:39:13.285915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.286080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.286105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.286271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.286426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.286451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.286626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.286809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.286834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.287035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.287211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.287236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.287398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.287579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.287603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.287791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.287948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.287979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.288144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.288331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.288355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 
00:22:26.198 [2024-05-15 02:39:13.288524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.288719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.288746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.288915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.289088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.289113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.289272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.289442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.289468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.289621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.289838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.289863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.290055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.290237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.290262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.290478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.290645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.290672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.290862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.291019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.291044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 
00:22:26.198 [2024-05-15 02:39:13.291224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.291393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.291418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.291577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.291739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.291764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.291942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.292134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.292158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.292344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.292505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.292530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.292722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.292941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.292968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.293139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.293300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.293325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.293523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.293705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.293730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 
00:22:26.198 [2024-05-15 02:39:13.293924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.294101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.294127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.294347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.294526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.294552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.294755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.294940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.294965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.295122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.295287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.295312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.295508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.295696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.295721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.198 qpair failed and we were unable to recover it. 00:22:26.198 [2024-05-15 02:39:13.295889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.296080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.198 [2024-05-15 02:39:13.296106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.296260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.296449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.296476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 
00:22:26.199 [2024-05-15 02:39:13.296664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.296856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.296881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.297078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.297241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.297266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.297454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.297623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.297648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.297809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.297974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.298001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.298170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.298381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.298406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.298593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.298754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.298779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.298969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.299161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.299187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 
00:22:26.199 [2024-05-15 02:39:13.299374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.299536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.299561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.299729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.299917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.299948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.300144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.300308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.300334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.300496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.300656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.300682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.300854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.301021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.301048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.301203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.301397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.301421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.301605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.301759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.301784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 
00:22:26.199 [2024-05-15 02:39:13.302000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.302186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.302210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.302370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.302556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.302581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.302744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.302910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.302940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.303109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.303297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.303323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.303519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.303683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.303708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.303928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.304121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.304146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.304316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.304505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.304531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 
00:22:26.199 [2024-05-15 02:39:13.304710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.304899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.199 [2024-05-15 02:39:13.304923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.199 qpair failed and we were unable to recover it. 00:22:26.199 [2024-05-15 02:39:13.305094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.305281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.305307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.305494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.305684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.305708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.305864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.306061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.306087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.306255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.306451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.306478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.306663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.306825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.306851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.307014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.307187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.307212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 
00:22:26.200 [2024-05-15 02:39:13.307388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.307545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.307571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.307741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.307904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.307936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.308126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.308292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.308317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.308505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.308691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.308716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.308884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.309077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.309102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.309264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.309451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.309477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.309655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.309822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.309847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 
00:22:26.200 [2024-05-15 02:39:13.310021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.310188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.310213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.310397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.310554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.310579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.310734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.310926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.310957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.311130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.311296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.311321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.311541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.311693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.311718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.311882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.312048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.312076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.312265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.312424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.312448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 
00:22:26.200 [2024-05-15 02:39:13.312646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.312805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.312830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.313004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.313165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.313189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.313343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.313499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.313524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.313709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.313867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.313892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.314116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.314283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.314309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.314508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.314669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.314694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.314864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.315037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.315065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 
00:22:26.200 [2024-05-15 02:39:13.315233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.315443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.315468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.315665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.315883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.315908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.316094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.316258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.316285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.316448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.316651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.316677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.200 [2024-05-15 02:39:13.316834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.316995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.200 [2024-05-15 02:39:13.317020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.200 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.317193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.317348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.317373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.317549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.317736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.317760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 
00:22:26.201 [2024-05-15 02:39:13.317946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.318140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.318167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.318387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.318545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.318569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.318742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.318910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.318942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.319139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.319329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.319354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.319512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.319697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.319723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.319916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.320113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.320138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.320333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.320499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.320525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 
00:22:26.201 [2024-05-15 02:39:13.320707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.320887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.320912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.321117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.321306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.321332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.321550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.321737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.321762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.321925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.322136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.322162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.322331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.322546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.322572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.322790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.322967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.322994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.323190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.323379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.323405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 
00:22:26.201 [2024-05-15 02:39:13.323562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.323718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.323743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.323924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.324127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.324153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.324345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.324525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.324551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.324738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.324914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.324944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.325111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.325290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.325315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.325497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.325684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.325710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.325866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.326024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.326050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 
00:22:26.201 [2024-05-15 02:39:13.326243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.326437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.326464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.326650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.326815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.326841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.327038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.327197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.327223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.327395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.327550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.327576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.327750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.327954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.327979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.328145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.328301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.328327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.328514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.328696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.328722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 
00:22:26.201 [2024-05-15 02:39:13.328887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.329075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.329104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.329302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.329468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.329493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.329679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.329893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.329918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.330097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.330277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.330302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.330481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.330649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.330676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.330868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.331030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.331057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.331247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.331428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.331453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 
00:22:26.201 [2024-05-15 02:39:13.331645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.331810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.331836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.332024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.332186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.332213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.332371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.332533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.332558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.332725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.332881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.332906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.333080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.333245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.333270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.333461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.333633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.333660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.333819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.334006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.334032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 
00:22:26.201 [2024-05-15 02:39:13.334196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.334390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.334420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.201 qpair failed and we were unable to recover it. 00:22:26.201 [2024-05-15 02:39:13.334615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.201 [2024-05-15 02:39:13.334782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.334807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.334975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.335198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.335225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.335408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.335561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.335587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.335754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.335943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.335969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.336137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.336303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.336329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.336515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.336712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.336738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 
00:22:26.202 [2024-05-15 02:39:13.336924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.337095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.337120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.337307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.337475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.337501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.337698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.337863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.337890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.338057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.338256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.338288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.338450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.338659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.338684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.338860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.339025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.339050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.339211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.339373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.339399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 
00:22:26.202 [2024-05-15 02:39:13.339587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.339775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.339801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.339994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.340187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.340212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.340365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.340540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.340565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.340730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.340909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.340939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.341127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.341287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.341312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.341533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.341723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.341748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.341917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.342096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.342126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 
00:22:26.202 [2024-05-15 02:39:13.342286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.342452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.342478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.342663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.342857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.342883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.343068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.343239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.343265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.343459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.343619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.343645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.343811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.343993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.344019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.344210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.344404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.344430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.344612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.344777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.344803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 
00:22:26.202 [2024-05-15 02:39:13.345024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.345191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.345218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.345415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.345602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.345627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.345816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.345979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.346009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.346172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.346334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.346361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.346533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.346718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.346745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.346925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.347102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.347128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.347290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.347482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.347508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 
00:22:26.202 [2024-05-15 02:39:13.347674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.347841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.347867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.348071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.348241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.348268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.348438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.348594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.348620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.348785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.348969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.348994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.349148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.349357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.349384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.349572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.349764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.349789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.349962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.350127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.350153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 
00:22:26.202 [2024-05-15 02:39:13.350324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.350542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.350568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.350750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.350917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.350955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.351148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.351335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.351360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.351544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.351699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.351725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.351913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.352090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.352115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.352282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.352445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.352472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.352659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.352822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.352846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 
00:22:26.202 [2024-05-15 02:39:13.353012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.353202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.353228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.353408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.353560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.353584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.353807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.353995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.354021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.354195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.354384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.354410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.354606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.354767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.354793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.354957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.355127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.355152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.355319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.355478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.355503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 
00:22:26.202 [2024-05-15 02:39:13.355662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.355848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.355872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.202 qpair failed and we were unable to recover it. 00:22:26.202 [2024-05-15 02:39:13.356027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.202 [2024-05-15 02:39:13.356186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.356211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.356428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.356579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.356605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.356792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.356975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.357002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.357202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.357392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.357417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.357596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.357781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.357807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.357979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.358143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.358169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 
00:22:26.203 [2024-05-15 02:39:13.358363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.358522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.358547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.358705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.358897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.358922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.359100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.359274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.359300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.359464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.359649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.359674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.359830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.360022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.360048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.360225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.360393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.360417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.360632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.360824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.360849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 
00:22:26.203 [2024-05-15 02:39:13.361047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.361232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.361257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.361449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.361642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.361667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.361832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.362006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.362033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.362227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.362390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.362415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.362579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.362748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.362774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.362966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.363128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.363153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.363351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.363513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.363540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 
00:22:26.203 [2024-05-15 02:39:13.363701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.363885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.363911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.364106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.364269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.364294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.364517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.364707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.364732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.364894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.365092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.365119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.365285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.365471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.365497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.365667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.365828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.365854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.366078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.366238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.366264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 
00:22:26.203 [2024-05-15 02:39:13.366418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.366569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.366594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.366755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.366939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.366966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.367127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.367291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.367316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.367521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.367765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.367791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.367974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.368140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.368166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.368338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.368503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.368528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.368688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.368852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.368877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 
00:22:26.203 [2024-05-15 02:39:13.369072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.369255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.369281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.369456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.369607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.369632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.369794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.369964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.369990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.370175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.370370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.370397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.370587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.370750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.370777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.370951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.371133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.371159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.371390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.371580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.371606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 
00:22:26.203 [2024-05-15 02:39:13.371793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.371956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.371983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.372144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.372321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.372348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.372528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.372688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.372713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.372883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.373084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.373110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.373294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.373459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.373487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.373667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.373832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.373859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.374051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.374211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.374238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 
00:22:26.203 [2024-05-15 02:39:13.374427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.374617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.374644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.374808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.374971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.374998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.375187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.375371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.375396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.375565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.375722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.375747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.375908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.376072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.376098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.376256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.376446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.376471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.376640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.376800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.376826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 
00:22:26.203 [2024-05-15 02:39:13.377024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.377180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.377206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.203 qpair failed and we were unable to recover it. 00:22:26.203 [2024-05-15 02:39:13.377397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.203 [2024-05-15 02:39:13.377620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.377645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.377832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.377996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.378023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.378216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.378405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.378430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.378591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.378778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.378804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.378969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.379158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.379183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.379372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.379542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.379570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 
00:22:26.204 [2024-05-15 02:39:13.379734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.379918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.379949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.380164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.380331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.380357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.380556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.380725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.380752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.380940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.381130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.381157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.381378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.381539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.381563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.381754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.381923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.381955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.382116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.382319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.382344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 
00:22:26.204 [2024-05-15 02:39:13.382506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.382715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.382739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.382893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.383099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.383126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.383296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.383462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.383488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.383657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.383821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.383846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.384034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.384192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.384218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.384405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.384568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.384594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.384778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.384945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.384971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 
00:22:26.204 [2024-05-15 02:39:13.385149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.385313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.385340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.385539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.385726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.385751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.385922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.386082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.386107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.386266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.386456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.386480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.386677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.386864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.386890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.387059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.387219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.387245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.387405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.387626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.387651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 
00:22:26.204 [2024-05-15 02:39:13.387838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.388054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.388080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.388253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.388446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.388473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.388639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.388822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.388847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.389045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.389215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.389240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.389434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.389626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.389652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.389823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.389995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.390021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.390188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.390372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.390397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 
00:22:26.204 [2024-05-15 02:39:13.390615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.390772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.390798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.390973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.391159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.391184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.391400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.391568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.391594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.391787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.391972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.391999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.392166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.392400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.392425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.392638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.392820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.392845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.393018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.393186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.393211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 
00:22:26.204 [2024-05-15 02:39:13.393379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.393544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.393570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.393769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.393962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.393988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.394150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.394339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.394365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.394536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.394695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.394720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.394892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.395089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.395114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.395278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.395458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.395483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.395646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.395831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.395855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 
00:22:26.204 [2024-05-15 02:39:13.396033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.396250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.396282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.396472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.396631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.396656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.396854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.397042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.397068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.204 [2024-05-15 02:39:13.397257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.397420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.204 [2024-05-15 02:39:13.397446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.204 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.397607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.397789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.397815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.398009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.398167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.398192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.398355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.398546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.398572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 
00:22:26.205 [2024-05-15 02:39:13.398733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.398924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.398962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.399133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.399295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.399320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.399513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.399675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.399700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.399866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.400064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.400095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.400260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.400442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.400467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.400647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.400809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.400834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.401006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.401168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.401193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 
00:22:26.205 [2024-05-15 02:39:13.401372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.401528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.401553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.401740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.401904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.401936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.402103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.402296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.402321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.402493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.402744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.402769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.402960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.403150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.403176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.403340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.403555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.403579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.403740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.403906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.403944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 
00:22:26.205 [2024-05-15 02:39:13.404122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.404285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.404310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.404473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.404640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.404665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.404859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.405020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.405046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.405210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.405362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.405387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.405576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.405746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.405772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.405970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.406123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.406149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.406316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.406521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.406546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 
00:22:26.205 [2024-05-15 02:39:13.406699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.406865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.406891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.407063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.407249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.407273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.407459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.407640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.407670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.407834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.408050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.408076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.408239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.408409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.408435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.408624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.408818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.408843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.409003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.409220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.409244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 
00:22:26.205 [2024-05-15 02:39:13.409407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.409589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.409614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.409776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.409972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.409998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.410164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.410377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.410402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.410591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.410767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.410792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.410994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.411188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.411214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.411379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.411538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.411563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.411759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.411926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.411956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 
00:22:26.205 [2024-05-15 02:39:13.412169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.412329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.412354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.412518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.412737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.412763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.412920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.413102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.413127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.413285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.413455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.413482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.413646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.413866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.413892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.414078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.414265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.414291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.414475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.414662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.414687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 
00:22:26.205 [2024-05-15 02:39:13.414850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.415007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.415034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.415234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.415407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.415434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.415624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.415817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.415843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.416010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.416187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.416213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.416382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.416540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.416565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.416760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.416916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.416947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.417121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.417277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.417303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 
00:22:26.205 [2024-05-15 02:39:13.417491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.417686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.417713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.417916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.418114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.418140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.418306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.418499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.418525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.418710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.418876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.418902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.205 qpair failed and we were unable to recover it. 00:22:26.205 [2024-05-15 02:39:13.419074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.419278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.205 [2024-05-15 02:39:13.419304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.419490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.419648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.419673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.419837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.420005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.420031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 
00:22:26.206 [2024-05-15 02:39:13.420245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.420443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.420468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.420655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.420838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.420863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.421025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.421221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.421247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.421406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.421599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.421625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.421786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.421952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.421980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.422145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.422363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.422388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.422555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.422742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.422767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 
00:22:26.206 [2024-05-15 02:39:13.422968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.423153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.423178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.423349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.423505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.423530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.423696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.423858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.423884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.424081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.424269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.424294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.424478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.424644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.424671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.424856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.425075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.425102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.425264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.425435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.425461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 
00:22:26.206 [2024-05-15 02:39:13.425617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.425774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.425799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.425990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.426156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.426183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.426363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.426524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.426549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.426708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.426861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.426887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.427069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.427221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.427247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.427441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.427600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.427626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.427814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.427975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.428002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 
00:22:26.206 [2024-05-15 02:39:13.428215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.428398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.428423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.428593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.428782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.428807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.428969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.429137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.429164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.429320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.429486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.429513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.429697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.429913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.429944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.430115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.430278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.430303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.430498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.430702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.430728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 
00:22:26.206 [2024-05-15 02:39:13.430919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.431120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.431145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.431330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.431544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.431570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.431735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.431920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.431963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.432158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.432327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.432353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.432533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.432695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.432720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.432921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.433090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.433116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.433312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.433503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.433529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 
00:22:26.206 [2024-05-15 02:39:13.433695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.433883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.433908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.434113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.434280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.434305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.434474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.434663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.434688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.434864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.435084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.435111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.435293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.435485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.435510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.435692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.435850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.435876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.436072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.436263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.436289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 
00:22:26.206 [2024-05-15 02:39:13.436479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.436633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.436658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.436828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.437014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.437041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.437207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.437407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.437432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.437588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.437756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.437781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.437974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.438140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.438165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.438326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.438536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.438562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 00:22:26.206 [2024-05-15 02:39:13.438754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.438944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.438971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.206 qpair failed and we were unable to recover it. 
00:22:26.206 [2024-05-15 02:39:13.439127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.206 [2024-05-15 02:39:13.439283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.439308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.439497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.439674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.439699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.439863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.440043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.440070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.440225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.440391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.440417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.440579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.440767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.440792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.440966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.441128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.441154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.441319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.441486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.441513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 
00:22:26.207 [2024-05-15 02:39:13.441705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.441882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.441907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.442116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.442303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.442328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.442516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.442679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.442704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.442864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.443047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.443074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.443234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.443417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.443442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.443594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.443805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.443831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.444002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.444162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.444187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 
00:22:26.207 [2024-05-15 02:39:13.444347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.444534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.444560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.444718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.444873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.444899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.445088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.445258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.445283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.445442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.445610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.445635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.445822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.446016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.446042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.446231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.446416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.446442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.446629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.446784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.446810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 
00:22:26.207 [2024-05-15 02:39:13.447006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.447167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.447193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.447358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.447555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.447580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.447789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.447952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.447978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.448149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.448320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.448348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.448571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.448763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.448789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.448969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.449132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.449158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.449334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.449489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.449514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 
00:22:26.207 [2024-05-15 02:39:13.449708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.449876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.449902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.450093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.450280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.450311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.450528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.450722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.450748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.450906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.451085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.451111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.451304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.451499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.451525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.451682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.451843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.451868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.452064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.452246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.452272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 
00:22:26.207 [2024-05-15 02:39:13.452430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.452592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.452618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.452786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.452998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.453025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.453188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.453408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.453434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.453598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.453757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.453782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.453973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.454138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.454164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.454327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.454487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.454514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.454671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.454831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.454856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 
00:22:26.207 [2024-05-15 02:39:13.455026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.455215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.455247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.455433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.455592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.455619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.455792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.455962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.455988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.456144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.456312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.456338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.456544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.456728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.456753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.456947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.457135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.457160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.457354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.457518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.457546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 
00:22:26.207 [2024-05-15 02:39:13.457723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.457889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.457919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.458117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.458307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.458333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.458492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.458661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.458686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.458874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.459056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.459082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.459269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.459457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.459483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.459653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.459848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.459873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 00:22:26.207 [2024-05-15 02:39:13.460045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.460258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.460283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.207 qpair failed and we were unable to recover it. 
00:22:26.207 [2024-05-15 02:39:13.460450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.460610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.207 [2024-05-15 02:39:13.460635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.460826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.461047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.461073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.461267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.461462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.461488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.461675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.461833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.461863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.462037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.462203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.462229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.462419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.462583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.462607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.462792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.462989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.463016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 
00:22:26.208 [2024-05-15 02:39:13.463179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.463337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.463362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.463565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.463751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.463776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.463989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.464157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.464183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.464378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.464564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.464590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.464776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.464945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.464971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.465144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.465335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.465360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.465550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.465735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.465764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 
00:22:26.208 [2024-05-15 02:39:13.465927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.466093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.466119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.466351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.466511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.466536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.466736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.466917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.466948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.467158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.467337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.467362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.467521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.467712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.467738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.467921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.468142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.468168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.468335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.468542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.468568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 
00:22:26.208 [2024-05-15 02:39:13.468778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.468985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.469013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.469180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.469363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.469388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.469583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.469744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.469773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.469944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.470117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.470142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.470304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.470463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.470488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.470697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.470882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.470907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.471112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.471270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.471295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 
00:22:26.208 [2024-05-15 02:39:13.471457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.471622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.471647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.471815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.472001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.472028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.472206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.472370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.472405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.472569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.472738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.472766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.472957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.473124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.473149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.473324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.473542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.473567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.473759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.473925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.473957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 
00:22:26.208 [2024-05-15 02:39:13.474124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.474317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.474342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.474498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.474677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.474702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.474869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.475044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.475071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.475232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.475393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.475419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.475641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.475860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.475885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.476064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.476252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.476278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.476475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.476637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.476662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 
00:22:26.208 [2024-05-15 02:39:13.476825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.476990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.477017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.477180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.477341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.477367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.477554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.477766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.477792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.478009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.478200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.478227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.478418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.478584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.478609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.478809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.478978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.479005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 00:22:26.208 [2024-05-15 02:39:13.479173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.479389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.208 [2024-05-15 02:39:13.479414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.208 qpair failed and we were unable to recover it. 
00:22:26.208 [2024-05-15 02:39:13.479598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.208 [2024-05-15 02:39:13.479754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.208 [2024-05-15 02:39:13.479779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420
00:22:26.208 qpair failed and we were unable to recover it.
00:22:26.208 .. 00:22:26.211 [2024-05-15 02:39:13.479954 .. 02:39:13.535947] the same four-line sequence repeats for every further connect attempt in this interval: two posix.c:1037:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420" entry, then "qpair failed and we were unable to recover it."
00:22:26.211 [2024-05-15 02:39:13.536136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.536352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.536378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.536546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.536712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.536738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.536898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.537094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.537119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.537310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.537473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.537501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.537701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.537865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.537892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.538115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.538276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.538301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.538488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.538648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.538674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 
00:22:26.211 [2024-05-15 02:39:13.538872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.539069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.539095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.539258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.539425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.539452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.539636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.539797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.539823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.539997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.540176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.540201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.540374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.540537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.540562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.540728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.540925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.540955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.541142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.541305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.541341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 
00:22:26.211 [2024-05-15 02:39:13.541515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.541702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.541728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.541917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.542119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.542144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.542301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.542459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.542484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.542648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.542846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.542872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.543066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.543227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.543254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.211 qpair failed and we were unable to recover it. 00:22:26.211 [2024-05-15 02:39:13.543484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.211 [2024-05-15 02:39:13.543645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.543672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.543838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.544008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.544035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 
00:22:26.212 [2024-05-15 02:39:13.544199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.544382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.544407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.544572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.544762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.544787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.544945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.545106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.545131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.545290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.545457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.545483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.545648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.545834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.545859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.546022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.546208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.546233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.546421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.546604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.546629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 
00:22:26.212 [2024-05-15 02:39:13.546818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.547000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.547026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.547219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.547389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.547415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.547576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.547744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.547769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.547940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.548097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.548123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.548341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.548503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.548529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.548694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.548877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.548904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.549092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.549302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.549327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 
00:22:26.212 [2024-05-15 02:39:13.549539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.549722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.549748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.549912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.550116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.550142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.550336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.550528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.550553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.550770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.550942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.550970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.551146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.551326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.551351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.551519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.551675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.551701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.551854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.552050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.552077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 
00:22:26.212 [2024-05-15 02:39:13.552289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.552454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.552481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.552665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.552826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.552852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.553052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.553239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.553264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.553448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.553631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.553657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.553878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.554038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.554065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.554260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.554451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.554476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.554667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.554834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.554860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 
00:22:26.212 [2024-05-15 02:39:13.555024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.555185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.555210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.555402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.555585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.555610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.555827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.555992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.556020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.556215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.556394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.556420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.556594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.556784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.556811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.557004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.557195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.557222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.557400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.557612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.557638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 
00:22:26.212 [2024-05-15 02:39:13.557822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.558004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.558031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.558184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.558362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.558388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.558576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.558742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.558767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.558943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.559146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.559172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.559355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.559537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.559563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.559733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.559889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.559915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.560084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.560273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.560299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 
00:22:26.212 [2024-05-15 02:39:13.560492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.560654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.560680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.560858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.561024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.561051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.561239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.561401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.561427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.561616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.561777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.561803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.561968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.562130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.562159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.562323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.562483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.562509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.562712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.562901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.562927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 
00:22:26.212 [2024-05-15 02:39:13.563123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.563289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.563316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.563475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.563641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.563667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.563879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.564049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.564076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.564258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.564474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.564499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.212 qpair failed and we were unable to recover it. 00:22:26.212 [2024-05-15 02:39:13.564682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.212 [2024-05-15 02:39:13.564845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.564871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.565064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.565253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.565278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.565433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.565601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.565627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 
00:22:26.213 [2024-05-15 02:39:13.565820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.565993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.566018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.566208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.566409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.566434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.566635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.566829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.566854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.567022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.567214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.567240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.567404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.567589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.567615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.567802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.567973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.568000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.568182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.568370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.568395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 
00:22:26.213 [2024-05-15 02:39:13.568558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.568798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.568823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.568988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.569209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.569240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.569428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.569584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.569609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.569830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.570016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.570042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.570243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.570429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.570454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.570637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.570830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.570856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.571044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.571208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.571238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 
00:22:26.213 [2024-05-15 02:39:13.571426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.571589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.571615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.571788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.571952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.571977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.572164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.572356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.572381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.572564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.572745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.572770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.572942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.573129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.573155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.573326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.573514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.573540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 00:22:26.213 [2024-05-15 02:39:13.573699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.573859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.213 [2024-05-15 02:39:13.573884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.213 qpair failed and we were unable to recover it. 
00:22:26.213 [2024-05-15 02:39:13.574046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.213 [2024-05-15 02:39:13.574222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.213 [2024-05-15 02:39:13.574248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420
00:22:26.213 qpair failed and we were unable to recover it.
00:22:26.213 (this connect() failed / sock connection error / qpair failed sequence repeats for every connection attempt from 02:39:13.574046 through 02:39:13.595468)
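On Linux, errno = 111 is ECONNREFUSED: each connect() from the initiator to 10.0.0.2:4420 is being actively refused, which is consistent with the NVMe/TCP target not (or not yet) listening on that port while this disconnect test runs. The address and port come from the records above; the probe below is only an illustrative way to check the same condition by hand, not part of the test scripts:

  # one-shot TCP probe of the listener the initiator keeps retrying against
  if timeout 1 bash -c ': </dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "something is accepting connections on 10.0.0.2:4420"
  else
      echo "connection refused or timed out (the condition logged as errno = 111)"
  fi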
00:22:26.480 [2024-05-15 02:39:13.595641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.480 [2024-05-15 02:39:13.595856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.480 [2024-05-15 02:39:13.595882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420
00:22:26.480 qpair failed and we were unable to recover it.
00:22:26.480 (this connect()/qpair failure sequence repeats from 02:39:13.596043 through 02:39:13.598147)
00:22:26.480 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:22:26.480 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:22:26.480 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:26.480 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:26.480 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:26.480 (the connect()/qpair failure sequence against 10.0.0.2:4420 continues in parallel, from 02:39:13.598302 through 02:39:13.600362)
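The shell trace interleaved here shows the harness finishing its start_nvmf_tgt phase: a retry-counter check of the form (( i == 0 )) in autotest_common.sh decides whether the wait succeeded, return 0 reports success, and timing_exit start_nvmf_tgt closes the timed section. The real loop lives in SPDK's test/common/autotest_common.sh; the sketch below is only a hypothetical illustration of that bounded-poll pattern (function name, retry count, and probe method are invented for illustration):

  # hypothetical bounded poll mirroring the (( i == 0 )) / return 0 pattern in the trace
  wait_for_tcp_listener() {
      local addr=$1 port=$2 i
      for ((i = 30; i > 0; i--)); do
          # stop as soon as a TCP connection to addr:port is accepted
          timeout 1 bash -c ": </dev/tcp/$addr/$port" 2>/dev/null && return 0
          sleep 1
      done
      (( i == 0 )) && return 1   # retries exhausted: the listener never came up
  }

  wait_for_tcp_listener 10.0.0.2 4420 || echo "NVMe/TCP listener did not appear"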
00:22:26.480 [2024-05-15 02:39:13.600518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.480 [2024-05-15 02:39:13.600700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.480 [2024-05-15 02:39:13.600726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420
00:22:26.480 qpair failed and we were unable to recover it.
00:22:26.480 (this connect()/qpair failure sequence repeats for every connection attempt from 02:39:13.600518 through 02:39:13.618936)
00:22:26.483 [2024-05-15 02:39:13.619129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.483 [2024-05-15 02:39:13.619316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.483 [2024-05-15 02:39:13.619343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420
00:22:26.483 qpair failed and we were unable to recover it.
00:22:26.483 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:26.483 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:26.483 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:26.483 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:26.483 (the connect()/qpair failure sequence against tqpair=0x7f2b50000b90, addr=10.0.0.2, port=4420 continues in parallel, from 02:39:13.619506 through 02:39:13.621237)
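With the target up, target_disconnect.sh line 19 creates the backing device for the test subsystem. rpc_cmd is an SPDK test helper that, in essence, forwards its arguments to scripts/rpc.py against the running target's RPC socket; assuming the default socket path, a roughly equivalent direct invocation would be:

  # create a 64 MB malloc bdev with 512-byte blocks named Malloc0
  # (default RPC socket /var/tmp/spdk.sock assumed; pass a different -s if the target uses another path)
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0

  # confirm the bdev exists before it gets attached to an NVMe-oF subsystem
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc0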
00:22:26.483 [2024-05-15 02:39:13.621419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.483 [2024-05-15 02:39:13.621610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.483 [2024-05-15 02:39:13.621636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420
00:22:26.483 qpair failed and we were unable to recover it.
00:22:26.484 (this connect()/qpair failure sequence repeats for every connection attempt from 02:39:13.621419 through 02:39:13.632455)
00:22:26.484 [2024-05-15 02:39:13.632650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.484 [2024-05-15 02:39:13.632810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.484 [2024-05-15 02:39:13.632837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.484 qpair failed and we were unable to recover it. 00:22:26.484 [2024-05-15 02:39:13.633000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.484 [2024-05-15 02:39:13.633180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.484 [2024-05-15 02:39:13.633206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.484 qpair failed and we were unable to recover it. 00:22:26.484 [2024-05-15 02:39:13.633399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.484 [2024-05-15 02:39:13.633571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.484 [2024-05-15 02:39:13.633595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.484 qpair failed and we were unable to recover it. 00:22:26.484 [2024-05-15 02:39:13.633789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.633958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.633995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.634168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.634367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.634393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.634584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.634780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.634806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.634992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.635301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.635327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 
00:22:26.485 [2024-05-15 02:39:13.635529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.635695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.635727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.635925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.636139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.636165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.636333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.636497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.636523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.636700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.636919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.636953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.637127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.637313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.637338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.637529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.637709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.637735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.637906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.638128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.638153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 
00:22:26.485 [2024-05-15 02:39:13.638347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.638541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.638567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.638733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.638896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.638921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.639125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.639292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.639319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.639500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.639693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.639724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.639897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.640066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.640093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.640262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.640449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.640474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.640667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.640881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.640907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 
00:22:26.485 [2024-05-15 02:39:13.641111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.641302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.641330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.641517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.641688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.641714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.485 qpair failed and we were unable to recover it. 00:22:26.485 [2024-05-15 02:39:13.641903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.642107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.485 [2024-05-15 02:39:13.642134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.642305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.642462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.642488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.642663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 Malloc0 00:22:26.486 [2024-05-15 02:39:13.642830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.642856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.643025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.486 [2024-05-15 02:39:13.643191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.643217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 
00:22:26.486 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:26.486 [2024-05-15 02:39:13.643388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.486 [2024-05-15 02:39:13.643552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.486 [2024-05-15 02:39:13.643578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.643738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.643899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.643927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.644137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.644333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.644360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.644553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.644713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.644739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.644893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.645065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.645091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.645254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.645447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.645473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 
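Interleaved with the connection retries, the shell trace shows the target side of the test starting its configuration: host/target_disconnect.sh line 21 runs rpc_cmd nvmf_create_transport -t tcp -o to create the NVMe-oF TCP transport (the '*** TCP Transport Init ***' notice a few records further down confirms it took effect). The same step can be issued against a running SPDK target with the bundled RPC client; the sketch below assumes the default /var/tmp/spdk.sock RPC socket and simply copies the -t/-o arguments from the trace above:

  # Sketch only: create the TCP transport on an already-running SPDK target.
  scripts/rpc.py nvmf_create_transport -t tcp -o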
00:22:26.486 [2024-05-15 02:39:13.645658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.645813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.645839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.646036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.646205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.646231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.646396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.646462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.486 [2024-05-15 02:39:13.646580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.646605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.646768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.646963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.486 [2024-05-15 02:39:13.646989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.486 qpair failed and we were unable to recover it. 00:22:26.486 [2024-05-15 02:39:13.647161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.647345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.647373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.647539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.647697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.647723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.647899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.648085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.648112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 
00:22:26.487 [2024-05-15 02:39:13.648280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.648466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.648492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.648697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.648925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.648956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.649124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.649317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.649343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.649532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.649745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.649771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.649954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.650135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.650161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.650360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.650521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.650546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.650730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.650896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.650922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 
00:22:26.487 [2024-05-15 02:39:13.651124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.651303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.651328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.651519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.651682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.651707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.651901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.652104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.652131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.652326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.652509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.652535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.652729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.652884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.652909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b50000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.653157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.653344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.653372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.653543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.653708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.653734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 
00:22:26.487 [2024-05-15 02:39:13.653897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.654066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.654092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.654259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.654420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.654447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 [2024-05-15 02:39:13.654608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.487 [2024-05-15 02:39:13.654771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 [2024-05-15 02:39:13.654797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.487 qpair failed and we were unable to recover it. 00:22:26.487 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:26.487 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.487 [2024-05-15 02:39:13.654985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.487 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.488 [2024-05-15 02:39:13.655157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.655183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.655352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.655515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.655540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.655737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.655925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.655956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 
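The next traced step, host/target_disconnect.sh line 22, creates the subsystem nqn.2016-06.io.spdk:cnode1; -a allows any host NQN to connect and -s sets the serial number. A standalone equivalent (sketch, same default-RPC-socket assumption as above):

  # Create the subsystem the host side will later connect to.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001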
00:22:26.488 [2024-05-15 02:39:13.656142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.656312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.656337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.656500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.656664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.656689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.656877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.657045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.657071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.657236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.657430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.657454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.657613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.657776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.657800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b58000b90 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.657990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.658169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.658198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.658367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.658535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.658562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 
00:22:26.488 [2024-05-15 02:39:13.658733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.658915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.658951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.659146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.659312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.659339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.659559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.659739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.659765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.659939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.660107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.660134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.660306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.660495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.660520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.660700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.660864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.660888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.661132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.661301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.661327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 
00:22:26.488 [2024-05-15 02:39:13.661542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.661700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.661725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.661896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.662102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.662132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.662329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.662503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 [2024-05-15 02:39:13.662529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.488 qpair failed and we were unable to recover it. 00:22:26.488 [2024-05-15 02:39:13.662687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.488 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.488 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:26.488 [2024-05-15 02:39:13.662852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.662877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.489 [2024-05-15 02:39:13.663046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.489 [2024-05-15 02:39:13.663266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.663294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.663469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.663655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.663680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 
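host/target_disconnect.sh line 24 then attaches the Malloc0 bdev (the bare 'Malloc0' printed slightly earlier in this output is the RPC reply naming that bdev) to the subsystem as a namespace. Standalone sketch, same assumptions:

  # Expose the Malloc0 bdev as a namespace of cnode1.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0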
00:22:26.489 [2024-05-15 02:39:13.663841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.664034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.664060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.664222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.664384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.664409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.664568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.664752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.664777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.664981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.665172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.665197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.665403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.665599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.665624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.665789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.665975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.666001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.666196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.666358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.666384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 
00:22:26.489 [2024-05-15 02:39:13.666583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.666749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.666775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.666952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.667121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.667146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.667341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.667505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.667530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.667745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.667958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.667984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.668149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.668342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.668368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.668588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.668780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.668805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.668981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.669140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.669165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 
00:22:26.489 [2024-05-15 02:39:13.669345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.669542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.669567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.669755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.669913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.669944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.670135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.670324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.670350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.489 [2024-05-15 02:39:13.670536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.670695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.489 [2024-05-15 02:39:13.670719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.489 qpair failed and we were unable to recover it. 00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.490 [2024-05-15 02:39:13.670885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.490 [2024-05-15 02:39:13.671072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.671098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.490 [2024-05-15 02:39:13.671263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.671433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.671458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 
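host/target_disconnect.sh line 25 adds the TCP listener on 10.0.0.2 port 4420, the exact address and port the host has been retrying against; -t, -a and -s are the trtype, traddr and trsvcid. Standalone sketch, same assumptions:

  # Start listening; after this the ECONNREFUSED retries stop
  # (see the 'Target Listening' notice further down).
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420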
00:22:26.490 [2024-05-15 02:39:13.671644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.671834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.671859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 00:22:26.490 [2024-05-15 02:39:13.672042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.672260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.672285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 00:22:26.490 [2024-05-15 02:39:13.672517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.672673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.672698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 00:22:26.490 [2024-05-15 02:39:13.672859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.673049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.673076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 00:22:26.490 [2024-05-15 02:39:13.673260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.673414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.673439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 00:22:26.490 [2024-05-15 02:39:13.673651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.673838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.673865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 00:22:26.490 [2024-05-15 02:39:13.674065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.674233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.674260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 
00:22:26.490 [2024-05-15 02:39:13.674428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.674508] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:26.490 [2024-05-15 02:39:13.674587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.490 [2024-05-15 02:39:13.674614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e420 with addr=10.0.0.2, port=4420 00:22:26.490 qpair failed and we were unable to recover it. 00:22:26.490 [2024-05-15 02:39:13.674766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.490 [2024-05-15 02:39:13.677307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.490 [2024-05-15 02:39:13.677490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.490 [2024-05-15 02:39:13.677518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.490 [2024-05-15 02:39:13.677534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.490 [2024-05-15 02:39:13.677547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.490 [2024-05-15 02:39:13.677581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.490 qpair failed and we were unable to recover it. 
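This is the turning point of the excerpt: rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 finally registers the listener, nvmf_rpc.c emits its deprecation warning (the [listen_]address.transport field is superseded by trtype and scheduled for removal in v24.09), and tcp.c confirms "NVMe/TCP Target Listening on 10.0.0.2 port 4420". The harness's rpc_cmd wrapper drives the same JSON-RPC that the in-tree client exposes, so a rough standalone equivalent (run against the target's default RPC socket; treating that default as an assumption) would be:

    # Assumed standalone equivalent of the two listener RPCs issued in this excerpt,
    # using SPDK's in-tree JSON-RPC client:
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery \
            -t tcp -a 10.0.0.2 -s 4420

The second call mirrors the discovery listener added a few lines further down.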
00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.490 02:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 2403981 00:22:26.490 [2024-05-15 02:39:13.687114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.490 [2024-05-15 02:39:13.687294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.490 [2024-05-15 02:39:13.687322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.490 [2024-05-15 02:39:13.687337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.490 [2024-05-15 02:39:13.687350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.490 [2024-05-15 02:39:13.687378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.490 qpair failed and we were unable to recover it. 00:22:26.490 [2024-05-15 02:39:13.697121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.490 [2024-05-15 02:39:13.697319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.490 [2024-05-15 02:39:13.697347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.490 [2024-05-15 02:39:13.697362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.490 [2024-05-15 02:39:13.697374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.490 [2024-05-15 02:39:13.697403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.490 qpair failed and we were unable to recover it. 
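From here to the end of the excerpt the log settles into a single repeating signature: the target rejects the I/O queue CONNECT with "Unknown controller ID 0x1", the host's fabric layer reports "Connect command failed, rc -5" and "sct 1, sc 130", and the qpair is dropped with "CQ transport error -6 (No such device or address)" on qpair id 3. Status code type 1 is the command-specific set, and for a Fabrics CONNECT a status of 130 (0x82) generally corresponds to the invalid-parameters case, which is consistent with the target not recognizing controller ID 0x1 in the disconnect scenario this test exercises. A quick, purely illustrative way to summarize how often each signature repeats in a captured copy of this output (the file name is an assumption):

    # Hypothetical post-processing of a saved copy of this console output:
    grep -oE 'errno = 111|Unknown controller ID 0x1|sct 1, sc 130|CQ transport error -6' \
            nvmf_target_disconnect_tc2.log | sort | uniq -c | sort -rn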
00:22:26.490 [2024-05-15 02:39:13.707099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.490 [2024-05-15 02:39:13.707267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.490 [2024-05-15 02:39:13.707294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.707309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.707321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.707350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 00:22:26.491 [2024-05-15 02:39:13.717123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.717290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.717317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.717332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.717344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.717372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 00:22:26.491 [2024-05-15 02:39:13.727157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.727333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.727359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.727374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.727392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.727421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 
00:22:26.491 [2024-05-15 02:39:13.737142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.737312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.737339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.737355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.737367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.737396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 00:22:26.491 [2024-05-15 02:39:13.747184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.747357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.747383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.747398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.747410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.747438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 00:22:26.491 [2024-05-15 02:39:13.757214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.757406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.757432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.757446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.757459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.757486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 
00:22:26.491 [2024-05-15 02:39:13.767256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.767436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.767462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.767476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.767489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.767517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 00:22:26.491 [2024-05-15 02:39:13.777282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.777461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.777487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.777502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.777514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.777542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 00:22:26.491 [2024-05-15 02:39:13.787266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.787435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.787461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.787476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.787488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.787516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 
00:22:26.491 [2024-05-15 02:39:13.797308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.797486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.797511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.491 [2024-05-15 02:39:13.797525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.491 [2024-05-15 02:39:13.797538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.491 [2024-05-15 02:39:13.797565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.491 qpair failed and we were unable to recover it. 00:22:26.491 [2024-05-15 02:39:13.807363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.491 [2024-05-15 02:39:13.807527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.491 [2024-05-15 02:39:13.807553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.492 [2024-05-15 02:39:13.807568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.492 [2024-05-15 02:39:13.807581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.492 [2024-05-15 02:39:13.807610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.492 qpair failed and we were unable to recover it. 00:22:26.492 [2024-05-15 02:39:13.817429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.492 [2024-05-15 02:39:13.817587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.492 [2024-05-15 02:39:13.817613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.492 [2024-05-15 02:39:13.817627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.492 [2024-05-15 02:39:13.817645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.492 [2024-05-15 02:39:13.817674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.492 qpair failed and we were unable to recover it. 
00:22:26.492 [2024-05-15 02:39:13.827386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.492 [2024-05-15 02:39:13.827551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.492 [2024-05-15 02:39:13.827577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.492 [2024-05-15 02:39:13.827592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.492 [2024-05-15 02:39:13.827604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.492 [2024-05-15 02:39:13.827632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.492 qpair failed and we were unable to recover it. 00:22:26.492 [2024-05-15 02:39:13.837415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.492 [2024-05-15 02:39:13.837584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.492 [2024-05-15 02:39:13.837611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.492 [2024-05-15 02:39:13.837625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.492 [2024-05-15 02:39:13.837638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.492 [2024-05-15 02:39:13.837666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.492 qpair failed and we were unable to recover it. 00:22:26.492 [2024-05-15 02:39:13.847479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.492 [2024-05-15 02:39:13.847645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.492 [2024-05-15 02:39:13.847672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.492 [2024-05-15 02:39:13.847687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.492 [2024-05-15 02:39:13.847699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.492 [2024-05-15 02:39:13.847727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.492 qpair failed and we were unable to recover it. 
00:22:26.492 [2024-05-15 02:39:13.858110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.492 [2024-05-15 02:39:13.858293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.492 [2024-05-15 02:39:13.858319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.492 [2024-05-15 02:39:13.858336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.492 [2024-05-15 02:39:13.858352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.492 [2024-05-15 02:39:13.858381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.492 qpair failed and we were unable to recover it. 00:22:26.492 [2024-05-15 02:39:13.867580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.492 [2024-05-15 02:39:13.867745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.492 [2024-05-15 02:39:13.867770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.492 [2024-05-15 02:39:13.867786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.492 [2024-05-15 02:39:13.867798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.492 [2024-05-15 02:39:13.867826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.492 qpair failed and we were unable to recover it. 00:22:26.492 [2024-05-15 02:39:13.877632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.492 [2024-05-15 02:39:13.877797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.492 [2024-05-15 02:39:13.877824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.492 [2024-05-15 02:39:13.877839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.492 [2024-05-15 02:39:13.877851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.492 [2024-05-15 02:39:13.877880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.492 qpair failed and we were unable to recover it. 
00:22:26.492 [2024-05-15 02:39:13.887649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.493 [2024-05-15 02:39:13.887823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.493 [2024-05-15 02:39:13.887851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.493 [2024-05-15 02:39:13.887866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.493 [2024-05-15 02:39:13.887878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.493 [2024-05-15 02:39:13.887907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.493 qpair failed and we were unable to recover it. 00:22:26.753 [2024-05-15 02:39:13.897604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.897780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.897807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.897823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.897838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.897866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 00:22:26.753 [2024-05-15 02:39:13.907649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.907857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.907884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.907905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.907919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.907953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 
00:22:26.753 [2024-05-15 02:39:13.917704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.917874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.917900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.917915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.917928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.917964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 00:22:26.753 [2024-05-15 02:39:13.927689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.927856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.927881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.927896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.927909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.927943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 00:22:26.753 [2024-05-15 02:39:13.937727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.937893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.937919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.937941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.937955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.937984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 
00:22:26.753 [2024-05-15 02:39:13.947720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.947886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.947913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.947927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.947947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.947975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 00:22:26.753 [2024-05-15 02:39:13.957772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.957942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.957967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.957982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.957994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.958022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 00:22:26.753 [2024-05-15 02:39:13.967811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.967985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.968010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.968026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.968038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.968066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 
00:22:26.753 [2024-05-15 02:39:13.977827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.978001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.978026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.978041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.978053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.978081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 00:22:26.753 [2024-05-15 02:39:13.987848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.988028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.988054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.988069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.988084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.988113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 00:22:26.753 [2024-05-15 02:39:13.997875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:13.998044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:13.998069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.753 [2024-05-15 02:39:13.998092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.753 [2024-05-15 02:39:13.998105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.753 [2024-05-15 02:39:13.998134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.753 qpair failed and we were unable to recover it. 
00:22:26.753 [2024-05-15 02:39:14.007886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.753 [2024-05-15 02:39:14.008083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.753 [2024-05-15 02:39:14.008109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.008124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.008136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.008164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.017921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.018091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.018117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.018132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.018144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.018173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.027948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.028119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.028144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.028158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.028171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.028198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 
00:22:26.754 [2024-05-15 02:39:14.037951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.038111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.038137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.038151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.038164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.038193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.047981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.048143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.048168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.048182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.048194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.048223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.058028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.058191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.058216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.058231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.058244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.058272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 
00:22:26.754 [2024-05-15 02:39:14.068072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.068240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.068265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.068280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.068293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.068321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.078074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.078240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.078264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.078279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.078291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.078320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.088195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.088382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.088413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.088429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.088441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.088469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 
00:22:26.754 [2024-05-15 02:39:14.098136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.098299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.098324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.098340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.098352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.098380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.108161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.108326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.108352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.108367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.108379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.108408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.118197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.118362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.118389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.118405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.118420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.118449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 
00:22:26.754 [2024-05-15 02:39:14.128219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.128390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.128416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.128431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.128444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.128472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.138240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.138405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.754 [2024-05-15 02:39:14.138431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.754 [2024-05-15 02:39:14.138446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.754 [2024-05-15 02:39:14.138459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.754 [2024-05-15 02:39:14.138488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.754 qpair failed and we were unable to recover it. 00:22:26.754 [2024-05-15 02:39:14.148330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.754 [2024-05-15 02:39:14.148547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.755 [2024-05-15 02:39:14.148573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.755 [2024-05-15 02:39:14.148588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.755 [2024-05-15 02:39:14.148601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.755 [2024-05-15 02:39:14.148630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.755 qpair failed and we were unable to recover it. 
00:22:26.755 [2024-05-15 02:39:14.158279] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.755 [2024-05-15 02:39:14.158441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.755 [2024-05-15 02:39:14.158466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.755 [2024-05-15 02:39:14.158481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.755 [2024-05-15 02:39:14.158493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:26.755 [2024-05-15 02:39:14.158521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.755 qpair failed and we were unable to recover it. 00:22:27.015 [2024-05-15 02:39:14.168337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.015 [2024-05-15 02:39:14.168500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.015 [2024-05-15 02:39:14.168526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.015 [2024-05-15 02:39:14.168545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.015 [2024-05-15 02:39:14.168557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.015 [2024-05-15 02:39:14.168585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.015 qpair failed and we were unable to recover it. 00:22:27.015 [2024-05-15 02:39:14.178379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.015 [2024-05-15 02:39:14.178572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.015 [2024-05-15 02:39:14.178603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.015 [2024-05-15 02:39:14.178619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.015 [2024-05-15 02:39:14.178631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.015 [2024-05-15 02:39:14.178659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.015 qpair failed and we were unable to recover it. 
00:22:27.015 [2024-05-15 02:39:14.188402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.015 [2024-05-15 02:39:14.188575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.015 [2024-05-15 02:39:14.188603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.015 [2024-05-15 02:39:14.188622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.188635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.188663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.198468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.198638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.198668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.198683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.198695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.198723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.208451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.208612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.208637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.208652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.208664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.208692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 
00:22:27.016 [2024-05-15 02:39:14.218494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.218659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.218685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.218700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.218712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.218745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.228493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.228675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.228700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.228715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.228727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.228755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.238570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.238734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.238759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.238774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.238786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.238816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 
00:22:27.016 [2024-05-15 02:39:14.248575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.248777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.248804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.248820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.248836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.248865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.258603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.258775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.258801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.258816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.258829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.258857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.268705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.268873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.268904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.268919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.268939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.268969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 
00:22:27.016 [2024-05-15 02:39:14.278629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.278799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.278825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.278839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.278851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.278879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.288660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.288820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.288845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.288860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.288873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.288901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.298682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.298849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.298874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.298889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.298901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.298937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 
00:22:27.016 [2024-05-15 02:39:14.308748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.308918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.308952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.308968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.308980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.309013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.318745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.318914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.318948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.318964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.016 [2024-05-15 02:39:14.318977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.016 [2024-05-15 02:39:14.319005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.016 qpair failed and we were unable to recover it. 00:22:27.016 [2024-05-15 02:39:14.328770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.016 [2024-05-15 02:39:14.328941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.016 [2024-05-15 02:39:14.328968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.016 [2024-05-15 02:39:14.328983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.328995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.329023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 
00:22:27.017 [2024-05-15 02:39:14.338799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.017 [2024-05-15 02:39:14.338966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.017 [2024-05-15 02:39:14.338992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.017 [2024-05-15 02:39:14.339007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.339020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.339048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 00:22:27.017 [2024-05-15 02:39:14.348826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.017 [2024-05-15 02:39:14.349043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.017 [2024-05-15 02:39:14.349069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.017 [2024-05-15 02:39:14.349084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.349096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.349130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 00:22:27.017 [2024-05-15 02:39:14.358840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.017 [2024-05-15 02:39:14.359009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.017 [2024-05-15 02:39:14.359041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.017 [2024-05-15 02:39:14.359056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.359068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.359097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 
00:22:27.017 [2024-05-15 02:39:14.368876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.017 [2024-05-15 02:39:14.369046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.017 [2024-05-15 02:39:14.369072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.017 [2024-05-15 02:39:14.369087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.369100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.369128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 00:22:27.017 [2024-05-15 02:39:14.378900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.017 [2024-05-15 02:39:14.379097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.017 [2024-05-15 02:39:14.379124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.017 [2024-05-15 02:39:14.379138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.379151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.379180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 00:22:27.017 [2024-05-15 02:39:14.388979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.017 [2024-05-15 02:39:14.389152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.017 [2024-05-15 02:39:14.389178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.017 [2024-05-15 02:39:14.389193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.389205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.389234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 
00:22:27.017 [2024-05-15 02:39:14.398962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.017 [2024-05-15 02:39:14.399136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.017 [2024-05-15 02:39:14.399161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.017 [2024-05-15 02:39:14.399175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.399193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.399222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 00:22:27.017 [2024-05-15 02:39:14.409007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.017 [2024-05-15 02:39:14.409206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.017 [2024-05-15 02:39:14.409232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.017 [2024-05-15 02:39:14.409246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.409259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.409287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 00:22:27.017 [2024-05-15 02:39:14.419014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.017 [2024-05-15 02:39:14.419175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.017 [2024-05-15 02:39:14.419201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.017 [2024-05-15 02:39:14.419215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.017 [2024-05-15 02:39:14.419227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.017 [2024-05-15 02:39:14.419255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.017 qpair failed and we were unable to recover it. 
00:22:27.277 [2024-05-15 02:39:14.429058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.429235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.429261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.429277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.429298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.429328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 00:22:27.277 [2024-05-15 02:39:14.439103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.439269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.439296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.439310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.439323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.439351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 00:22:27.277 [2024-05-15 02:39:14.449207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.449373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.449399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.449417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.449430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.449458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 
00:22:27.277 [2024-05-15 02:39:14.459125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.459340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.459366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.459380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.459392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.459420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 00:22:27.277 [2024-05-15 02:39:14.469192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.469360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.469386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.469401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.469413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.469441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 00:22:27.277 [2024-05-15 02:39:14.479255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.479466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.479491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.479506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.479518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.479546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 
00:22:27.277 [2024-05-15 02:39:14.489256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.489439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.489466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.489481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.489498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.489527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 00:22:27.277 [2024-05-15 02:39:14.499257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.499452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.499477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.499492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.499505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.499533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 00:22:27.277 [2024-05-15 02:39:14.509292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.509511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.509537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.509552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.509567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.509595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 
00:22:27.277 [2024-05-15 02:39:14.519314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.519485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.519510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.519525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.519537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.519566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 00:22:27.277 [2024-05-15 02:39:14.529333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.529536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.277 [2024-05-15 02:39:14.529562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.277 [2024-05-15 02:39:14.529577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.277 [2024-05-15 02:39:14.529589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.277 [2024-05-15 02:39:14.529617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.277 qpair failed and we were unable to recover it. 00:22:27.277 [2024-05-15 02:39:14.539374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.277 [2024-05-15 02:39:14.539539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.539566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.539581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.539593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.539621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 
00:22:27.278 [2024-05-15 02:39:14.549476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.549641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.549666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.549681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.549693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.549721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 00:22:27.278 [2024-05-15 02:39:14.559494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.559712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.559738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.559756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.559769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.559797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 00:22:27.278 [2024-05-15 02:39:14.569507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.569673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.569699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.569714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.569729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.569757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 
00:22:27.278 [2024-05-15 02:39:14.579515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.579676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.579702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.579717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.579734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.579763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 00:22:27.278 [2024-05-15 02:39:14.589525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.589692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.589718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.589733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.589745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.589774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 00:22:27.278 [2024-05-15 02:39:14.599580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.599744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.599771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.599785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.599798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.599826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 
00:22:27.278 [2024-05-15 02:39:14.609555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.609715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.609741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.609756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.609768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.609796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 00:22:27.278 [2024-05-15 02:39:14.619592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.619753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.619779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.619794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.619806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.619834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 00:22:27.278 [2024-05-15 02:39:14.629623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.629795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.629821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.629836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.629848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.629875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 
00:22:27.278 [2024-05-15 02:39:14.639665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.639835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.639862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.639878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.639890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.639918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 00:22:27.278 [2024-05-15 02:39:14.649732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.649900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.649924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.649950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.649963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.649992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 00:22:27.278 [2024-05-15 02:39:14.659714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.659913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.659949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.659968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.659980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.660009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 
00:22:27.278 [2024-05-15 02:39:14.669753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.669923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.278 [2024-05-15 02:39:14.669958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.278 [2024-05-15 02:39:14.669980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.278 [2024-05-15 02:39:14.669993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.278 [2024-05-15 02:39:14.670022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.278 qpair failed and we were unable to recover it. 00:22:27.278 [2024-05-15 02:39:14.679788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.278 [2024-05-15 02:39:14.679958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.279 [2024-05-15 02:39:14.679984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.279 [2024-05-15 02:39:14.679999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.279 [2024-05-15 02:39:14.680011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.279 [2024-05-15 02:39:14.680039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.279 qpair failed and we were unable to recover it. 00:22:27.279 [2024-05-15 02:39:14.689789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.279 [2024-05-15 02:39:14.689994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.279 [2024-05-15 02:39:14.690022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.279 [2024-05-15 02:39:14.690037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.279 [2024-05-15 02:39:14.690049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.279 [2024-05-15 02:39:14.690078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.279 qpair failed and we were unable to recover it. 
00:22:27.537 [2024-05-15 02:39:14.699871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.700103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.700130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.700145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.700158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.700186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 00:22:27.538 [2024-05-15 02:39:14.709875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.710084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.710111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.710129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.710142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.710170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 00:22:27.538 [2024-05-15 02:39:14.719904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.720080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.720106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.720121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.720133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.720161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 
00:22:27.538 [2024-05-15 02:39:14.729949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.730110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.730135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.730150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.730163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.730191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 00:22:27.538 [2024-05-15 02:39:14.739980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.740147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.740173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.740188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.740200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.740228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 00:22:27.538 [2024-05-15 02:39:14.749997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.750164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.750190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.750204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.750216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.750244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 
00:22:27.538 [2024-05-15 02:39:14.760019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.760194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.760219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.760243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.760256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.760284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 00:22:27.538 [2024-05-15 02:39:14.770055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.770290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.770316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.770330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.770343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.770371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 00:22:27.538 [2024-05-15 02:39:14.780171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.780337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.780363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.780378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.780393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.780421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 
00:22:27.538 [2024-05-15 02:39:14.790234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.790405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.790431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.790445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.790457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.790486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 00:22:27.538 [2024-05-15 02:39:14.800170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.800364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.800390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.800405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.800417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.800444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 00:22:27.538 [2024-05-15 02:39:14.810166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.810345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.810370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.810385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.810397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.810425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 
00:22:27.538 [2024-05-15 02:39:14.820241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.820408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.820434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.820448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.538 [2024-05-15 02:39:14.820461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.538 [2024-05-15 02:39:14.820489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.538 qpair failed and we were unable to recover it. 00:22:27.538 [2024-05-15 02:39:14.830337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.538 [2024-05-15 02:39:14.830501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.538 [2024-05-15 02:39:14.830526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.538 [2024-05-15 02:39:14.830540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.830553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.830580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 00:22:27.539 [2024-05-15 02:39:14.840260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.840425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.840451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.840466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.840478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.840506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 
00:22:27.539 [2024-05-15 02:39:14.850301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.850486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.850512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.850532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.850546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.850574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 00:22:27.539 [2024-05-15 02:39:14.860304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.860529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.860555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.860570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.860582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.860610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 00:22:27.539 [2024-05-15 02:39:14.870356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.870559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.870585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.870599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.870611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.870639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 
00:22:27.539 [2024-05-15 02:39:14.880357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.880541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.880566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.880581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.880593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.880621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 00:22:27.539 [2024-05-15 02:39:14.890376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.890540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.890565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.890580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.890592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.890621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 00:22:27.539 [2024-05-15 02:39:14.900479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.900640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.900666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.900681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.900693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.900721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 
00:22:27.539 [2024-05-15 02:39:14.910507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.910720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.910745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.910760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.910773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.910801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 00:22:27.539 [2024-05-15 02:39:14.920501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.920681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.920707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.920722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.920734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.920762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 00:22:27.539 [2024-05-15 02:39:14.930490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.930649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.930675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.930690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.930703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.930731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 
00:22:27.539 [2024-05-15 02:39:14.940549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.940717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.940748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.940764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.940777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.940805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 00:22:27.539 [2024-05-15 02:39:14.950587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.539 [2024-05-15 02:39:14.950846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.539 [2024-05-15 02:39:14.950875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.539 [2024-05-15 02:39:14.950892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.539 [2024-05-15 02:39:14.950904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.539 [2024-05-15 02:39:14.950950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.539 qpair failed and we were unable to recover it. 00:22:27.798 [2024-05-15 02:39:14.960562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.798 [2024-05-15 02:39:14.960730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.798 [2024-05-15 02:39:14.960755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.798 [2024-05-15 02:39:14.960770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.798 [2024-05-15 02:39:14.960783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.798 [2024-05-15 02:39:14.960810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.798 qpair failed and we were unable to recover it. 
00:22:27.798 [2024-05-15 02:39:14.970591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.798 [2024-05-15 02:39:14.970754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.798 [2024-05-15 02:39:14.970780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.798 [2024-05-15 02:39:14.970795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.798 [2024-05-15 02:39:14.970807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.798 [2024-05-15 02:39:14.970835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.798 qpair failed and we were unable to recover it. 00:22:27.798 [2024-05-15 02:39:14.980668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.798 [2024-05-15 02:39:14.980865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.798 [2024-05-15 02:39:14.980890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:14.980904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:14.980916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:14.980957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 00:22:27.799 [2024-05-15 02:39:14.990697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:14.990869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:14.990894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:14.990909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:14.990922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:14.990958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 
00:22:27.799 [2024-05-15 02:39:15.000701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.000870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.000895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.000910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.000922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.000957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 00:22:27.799 [2024-05-15 02:39:15.010741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.010909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.010945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.010962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.010974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.011002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 00:22:27.799 [2024-05-15 02:39:15.020769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.020947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.020972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.020987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.020999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.021028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 
00:22:27.799 [2024-05-15 02:39:15.030782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.030957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.030988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.031004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.031017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.031045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 00:22:27.799 [2024-05-15 02:39:15.040829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.041010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.041036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.041050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.041062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.041091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 00:22:27.799 [2024-05-15 02:39:15.050810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.050977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.051012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.051027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.051039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.051068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 
00:22:27.799 [2024-05-15 02:39:15.060897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.061133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.061159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.061173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.061186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.061214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 00:22:27.799 [2024-05-15 02:39:15.070936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.071105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.071130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.071145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.071157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.071191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 00:22:27.799 [2024-05-15 02:39:15.080922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.081090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.081115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.081130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.081142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.081170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 
00:22:27.799 [2024-05-15 02:39:15.090998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.091187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.091213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.091228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.091240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.091269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 00:22:27.799 [2024-05-15 02:39:15.100952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.101112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.101138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.101152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.101165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.101193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 00:22:27.799 [2024-05-15 02:39:15.111023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.799 [2024-05-15 02:39:15.111195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.799 [2024-05-15 02:39:15.111224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.799 [2024-05-15 02:39:15.111240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.799 [2024-05-15 02:39:15.111252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.799 [2024-05-15 02:39:15.111280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.799 qpair failed and we were unable to recover it. 
00:22:27.799 [2024-05-15 02:39:15.121031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.121199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.121238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.121254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.121267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.121295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 00:22:27.800 [2024-05-15 02:39:15.131056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.131221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.131247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.131261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.131274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.131302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 00:22:27.800 [2024-05-15 02:39:15.141081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.141243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.141269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.141284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.141296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.141324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 
00:22:27.800 [2024-05-15 02:39:15.151142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.151350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.151375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.151390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.151402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.151430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 00:22:27.800 [2024-05-15 02:39:15.161145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.161314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.161340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.161354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.161367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.161400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 00:22:27.800 [2024-05-15 02:39:15.171167] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.171339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.171364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.171379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.171391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.171418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 
00:22:27.800 [2024-05-15 02:39:15.181257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.181420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.181446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.181461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.181473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.181501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 00:22:27.800 [2024-05-15 02:39:15.191277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.191448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.191475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.191494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.191508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.191537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 00:22:27.800 [2024-05-15 02:39:15.201301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.201469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.201496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.201515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.201527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.201556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 
00:22:27.800 [2024-05-15 02:39:15.211328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.800 [2024-05-15 02:39:15.211513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.800 [2024-05-15 02:39:15.211552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.800 [2024-05-15 02:39:15.211576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.800 [2024-05-15 02:39:15.211589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:27.800 [2024-05-15 02:39:15.211619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.800 qpair failed and we were unable to recover it. 00:22:28.061 [2024-05-15 02:39:15.221363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.061 [2024-05-15 02:39:15.221532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.061 [2024-05-15 02:39:15.221559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.061 [2024-05-15 02:39:15.221574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.061 [2024-05-15 02:39:15.221586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.061 [2024-05-15 02:39:15.221614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.061 qpair failed and we were unable to recover it. 00:22:28.062 [2024-05-15 02:39:15.231348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.231512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.231538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.231553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.231565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.231593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 
00:22:28.062 [2024-05-15 02:39:15.241397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.241564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.241591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.241605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.241617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.241645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 00:22:28.062 [2024-05-15 02:39:15.251396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.251563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.251588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.251603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.251621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.251649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 00:22:28.062 [2024-05-15 02:39:15.261455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.261654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.261680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.261695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.261708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.261736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 
00:22:28.062 [2024-05-15 02:39:15.271496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.271667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.271693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.271707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.271719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.271748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 00:22:28.062 [2024-05-15 02:39:15.281523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.281738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.281765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.281779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.281792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.281820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 00:22:28.062 [2024-05-15 02:39:15.291572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.291740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.291766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.291781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.291794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.291822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 
00:22:28.062 [2024-05-15 02:39:15.301536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.301702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.301727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.301742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.301754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.301782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 00:22:28.062 [2024-05-15 02:39:15.311602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.311817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.311843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.311858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.311871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.311898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 00:22:28.062 [2024-05-15 02:39:15.321650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.321860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.321886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.321900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.321912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.321947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 
00:22:28.062 [2024-05-15 02:39:15.331633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.331800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.331826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.062 [2024-05-15 02:39:15.331841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.062 [2024-05-15 02:39:15.331853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.062 [2024-05-15 02:39:15.331882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.062 qpair failed and we were unable to recover it. 00:22:28.062 [2024-05-15 02:39:15.341688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.062 [2024-05-15 02:39:15.341912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.062 [2024-05-15 02:39:15.341947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.341963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.341980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.342010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 00:22:28.063 [2024-05-15 02:39:15.351733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.351955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.351981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.351996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.352009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.352037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 
00:22:28.063 [2024-05-15 02:39:15.361720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.361883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.361909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.361923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.361944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.361973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 00:22:28.063 [2024-05-15 02:39:15.371734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.371899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.371926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.371949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.371962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.371990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 00:22:28.063 [2024-05-15 02:39:15.381772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.381944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.381972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.381988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.382000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.382029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 
00:22:28.063 [2024-05-15 02:39:15.391813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.392000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.392027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.392042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.392055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.392083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 00:22:28.063 [2024-05-15 02:39:15.401848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.402045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.402071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.402085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.402098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.402127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 00:22:28.063 [2024-05-15 02:39:15.411850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.412021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.412048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.412064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.412076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.412105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 
00:22:28.063 [2024-05-15 02:39:15.421878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.422041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.422075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.422090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.422102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.422130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 00:22:28.063 [2024-05-15 02:39:15.431940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.432118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.432144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.432164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.432177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.432205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 00:22:28.063 [2024-05-15 02:39:15.441938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.442113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.442139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.442154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.442166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.442195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 
00:22:28.063 [2024-05-15 02:39:15.451999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.063 [2024-05-15 02:39:15.452173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.063 [2024-05-15 02:39:15.452199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.063 [2024-05-15 02:39:15.452214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.063 [2024-05-15 02:39:15.452226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.063 [2024-05-15 02:39:15.452253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.063 qpair failed and we were unable to recover it. 00:22:28.063 [2024-05-15 02:39:15.461995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.064 [2024-05-15 02:39:15.462153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.064 [2024-05-15 02:39:15.462178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.064 [2024-05-15 02:39:15.462192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.064 [2024-05-15 02:39:15.462204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.064 [2024-05-15 02:39:15.462233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.064 qpair failed and we were unable to recover it. 00:22:28.064 [2024-05-15 02:39:15.472041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.064 [2024-05-15 02:39:15.472206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.064 [2024-05-15 02:39:15.472232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.064 [2024-05-15 02:39:15.472247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.064 [2024-05-15 02:39:15.472259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.064 [2024-05-15 02:39:15.472287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.064 qpair failed and we were unable to recover it. 
00:22:28.325 [2024-05-15 02:39:15.482075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.325 [2024-05-15 02:39:15.482247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.325 [2024-05-15 02:39:15.482273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.325 [2024-05-15 02:39:15.482287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.325 [2024-05-15 02:39:15.482299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.325 [2024-05-15 02:39:15.482327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.325 qpair failed and we were unable to recover it. 00:22:28.325 [2024-05-15 02:39:15.492096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.325 [2024-05-15 02:39:15.492267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.325 [2024-05-15 02:39:15.492294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.325 [2024-05-15 02:39:15.492308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.325 [2024-05-15 02:39:15.492320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.325 [2024-05-15 02:39:15.492348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.325 qpair failed and we were unable to recover it. 00:22:28.325 [2024-05-15 02:39:15.502129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.325 [2024-05-15 02:39:15.502294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.325 [2024-05-15 02:39:15.502319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.325 [2024-05-15 02:39:15.502333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.325 [2024-05-15 02:39:15.502345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.325 [2024-05-15 02:39:15.502373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.325 qpair failed and we were unable to recover it. 
00:22:28.325 [2024-05-15 02:39:15.512163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.325 [2024-05-15 02:39:15.512380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.325 [2024-05-15 02:39:15.512405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.325 [2024-05-15 02:39:15.512420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.325 [2024-05-15 02:39:15.512432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.325 [2024-05-15 02:39:15.512461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.325 qpair failed and we were unable to recover it. 00:22:28.325 [2024-05-15 02:39:15.522164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.325 [2024-05-15 02:39:15.522331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.325 [2024-05-15 02:39:15.522356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.325 [2024-05-15 02:39:15.522380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.325 [2024-05-15 02:39:15.522394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.325 [2024-05-15 02:39:15.522422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.325 qpair failed and we were unable to recover it. 00:22:28.325 [2024-05-15 02:39:15.532211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.325 [2024-05-15 02:39:15.532370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.325 [2024-05-15 02:39:15.532396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.325 [2024-05-15 02:39:15.532411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.325 [2024-05-15 02:39:15.532423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.325 [2024-05-15 02:39:15.532451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.325 qpair failed and we were unable to recover it. 
00:22:28.325 [2024-05-15 02:39:15.542223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.325 [2024-05-15 02:39:15.542382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.325 [2024-05-15 02:39:15.542408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.325 [2024-05-15 02:39:15.542423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.325 [2024-05-15 02:39:15.542435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.325 [2024-05-15 02:39:15.542463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.325 qpair failed and we were unable to recover it. 00:22:28.325 [2024-05-15 02:39:15.552268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.325 [2024-05-15 02:39:15.552487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.325 [2024-05-15 02:39:15.552512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.325 [2024-05-15 02:39:15.552527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.325 [2024-05-15 02:39:15.552540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.325 [2024-05-15 02:39:15.552567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.325 qpair failed and we were unable to recover it. 00:22:28.325 [2024-05-15 02:39:15.562300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.562477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.562502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.562516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.562529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.562556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 
00:22:28.326 [2024-05-15 02:39:15.572409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.572589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.572614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.572630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.572642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.572670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 00:22:28.326 [2024-05-15 02:39:15.582368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.582535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.582561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.582576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.582591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.582620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 00:22:28.326 [2024-05-15 02:39:15.592412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.592583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.592609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.592624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.592637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.592665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 
00:22:28.326 [2024-05-15 02:39:15.602400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.602568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.602603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.602618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.602630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.602658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 00:22:28.326 [2024-05-15 02:39:15.612477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.612652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.612678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.612699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.612712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.612740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 00:22:28.326 [2024-05-15 02:39:15.622568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.622759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.622785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.622799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.622812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.622841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 
00:22:28.326 [2024-05-15 02:39:15.632542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.632749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.632775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.632790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.632802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.632831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 00:22:28.326 [2024-05-15 02:39:15.642523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.642683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.642709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.642724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.642736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.642765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 00:22:28.326 [2024-05-15 02:39:15.652591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.652758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.652783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.652798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.652810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.652838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 
00:22:28.326 [2024-05-15 02:39:15.662578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.662755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.662782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.662797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.662809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.662838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 00:22:28.326 [2024-05-15 02:39:15.672637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.672807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.672834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.672849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.672861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.672888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 00:22:28.326 [2024-05-15 02:39:15.682667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.682831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.682858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.682873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.682885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.682913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.326 qpair failed and we were unable to recover it. 
00:22:28.326 [2024-05-15 02:39:15.692666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.326 [2024-05-15 02:39:15.692836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.326 [2024-05-15 02:39:15.692862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.326 [2024-05-15 02:39:15.692877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.326 [2024-05-15 02:39:15.692889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.326 [2024-05-15 02:39:15.692917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.327 qpair failed and we were unable to recover it. 00:22:28.327 [2024-05-15 02:39:15.702699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.327 [2024-05-15 02:39:15.702876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.327 [2024-05-15 02:39:15.702907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.327 [2024-05-15 02:39:15.702923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.327 [2024-05-15 02:39:15.702942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.327 [2024-05-15 02:39:15.702972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.327 qpair failed and we were unable to recover it. 00:22:28.327 [2024-05-15 02:39:15.712776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.327 [2024-05-15 02:39:15.712969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.327 [2024-05-15 02:39:15.712996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.327 [2024-05-15 02:39:15.713010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.327 [2024-05-15 02:39:15.713022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.327 [2024-05-15 02:39:15.713050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.327 qpair failed and we were unable to recover it. 
00:22:28.327 [2024-05-15 02:39:15.722782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.327 [2024-05-15 02:39:15.722953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.327 [2024-05-15 02:39:15.722978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.327 [2024-05-15 02:39:15.722993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.327 [2024-05-15 02:39:15.723006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.327 [2024-05-15 02:39:15.723034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.327 qpair failed and we were unable to recover it. 00:22:28.327 [2024-05-15 02:39:15.732816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.327 [2024-05-15 02:39:15.732994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.327 [2024-05-15 02:39:15.733025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.327 [2024-05-15 02:39:15.733042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.327 [2024-05-15 02:39:15.733054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.327 [2024-05-15 02:39:15.733084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.327 qpair failed and we were unable to recover it. 00:22:28.586 [2024-05-15 02:39:15.742806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.586 [2024-05-15 02:39:15.742981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.586 [2024-05-15 02:39:15.743008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.586 [2024-05-15 02:39:15.743023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.586 [2024-05-15 02:39:15.743036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.586 [2024-05-15 02:39:15.743069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.586 qpair failed and we were unable to recover it. 
00:22:28.586 [2024-05-15 02:39:15.752850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.586 [2024-05-15 02:39:15.753030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.586 [2024-05-15 02:39:15.753056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.586 [2024-05-15 02:39:15.753072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.586 [2024-05-15 02:39:15.753084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.586 [2024-05-15 02:39:15.753113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.586 qpair failed and we were unable to recover it. 00:22:28.586 [2024-05-15 02:39:15.762865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.586 [2024-05-15 02:39:15.763036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.586 [2024-05-15 02:39:15.763061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.586 [2024-05-15 02:39:15.763076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.586 [2024-05-15 02:39:15.763088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.586 [2024-05-15 02:39:15.763117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.586 qpair failed and we were unable to recover it. 00:22:28.586 [2024-05-15 02:39:15.772946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.586 [2024-05-15 02:39:15.773111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.586 [2024-05-15 02:39:15.773137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.586 [2024-05-15 02:39:15.773152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.586 [2024-05-15 02:39:15.773164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.586 [2024-05-15 02:39:15.773192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.586 qpair failed and we were unable to recover it. 
00:22:28.586 [2024-05-15 02:39:15.782971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.586 [2024-05-15 02:39:15.783161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.586 [2024-05-15 02:39:15.783187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.586 [2024-05-15 02:39:15.783202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.586 [2024-05-15 02:39:15.783214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.586 [2024-05-15 02:39:15.783243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.586 qpair failed and we were unable to recover it. 00:22:28.586 [2024-05-15 02:39:15.793005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.586 [2024-05-15 02:39:15.793178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.586 [2024-05-15 02:39:15.793210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.586 [2024-05-15 02:39:15.793226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.586 [2024-05-15 02:39:15.793238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.586 [2024-05-15 02:39:15.793267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.586 qpair failed and we were unable to recover it. 00:22:28.586 [2024-05-15 02:39:15.803020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.586 [2024-05-15 02:39:15.803188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.586 [2024-05-15 02:39:15.803214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.586 [2024-05-15 02:39:15.803229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.586 [2024-05-15 02:39:15.803242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.586 [2024-05-15 02:39:15.803270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.586 qpair failed and we were unable to recover it. 
00:22:28.587 [2024-05-15 02:39:15.813027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.813195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.813221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.813236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.813249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.813276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 00:22:28.587 [2024-05-15 02:39:15.823111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.823337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.823364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.823378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.823391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.823419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 00:22:28.587 [2024-05-15 02:39:15.833106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.833278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.833304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.833322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.833334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.833368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 
00:22:28.587 [2024-05-15 02:39:15.843144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.843309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.843336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.843350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.843362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.843391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 00:22:28.587 [2024-05-15 02:39:15.853143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.853310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.853335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.853349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.853362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.853389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 00:22:28.587 [2024-05-15 02:39:15.863204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.863364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.863390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.863405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.863417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.863445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 
00:22:28.587 [2024-05-15 02:39:15.873237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.873408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.873434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.873449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.873461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.873489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 00:22:28.587 [2024-05-15 02:39:15.883235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.883401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.883433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.883448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.883461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.883489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 00:22:28.587 [2024-05-15 02:39:15.893244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.893427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.893453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.893468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.893481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.893509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 
00:22:28.587 [2024-05-15 02:39:15.903277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.903442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.903468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.903482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.903494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.903522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 00:22:28.587 [2024-05-15 02:39:15.913295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.913466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.913491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.913506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.913518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.913545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 00:22:28.587 [2024-05-15 02:39:15.923323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.923488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.923514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.923529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.923541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.923575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 
00:22:28.587 [2024-05-15 02:39:15.933350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.933518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.933544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.933559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.933571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.587 [2024-05-15 02:39:15.933599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.587 qpair failed and we were unable to recover it. 00:22:28.587 [2024-05-15 02:39:15.943401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.587 [2024-05-15 02:39:15.943568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.587 [2024-05-15 02:39:15.943594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.587 [2024-05-15 02:39:15.943609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.587 [2024-05-15 02:39:15.943622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.588 [2024-05-15 02:39:15.943650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.588 qpair failed and we were unable to recover it. 00:22:28.588 [2024-05-15 02:39:15.953479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.588 [2024-05-15 02:39:15.953679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.588 [2024-05-15 02:39:15.953707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.588 [2024-05-15 02:39:15.953723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.588 [2024-05-15 02:39:15.953735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.588 [2024-05-15 02:39:15.953765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.588 qpair failed and we were unable to recover it. 
00:22:28.588 [2024-05-15 02:39:15.963441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.588 [2024-05-15 02:39:15.963610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.588 [2024-05-15 02:39:15.963636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.588 [2024-05-15 02:39:15.963651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.588 [2024-05-15 02:39:15.963663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.588 [2024-05-15 02:39:15.963692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.588 qpair failed and we were unable to recover it. 00:22:28.588 [2024-05-15 02:39:15.973482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.588 [2024-05-15 02:39:15.973658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.588 [2024-05-15 02:39:15.973690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.588 [2024-05-15 02:39:15.973705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.588 [2024-05-15 02:39:15.973718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.588 [2024-05-15 02:39:15.973745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.588 qpair failed and we were unable to recover it. 00:22:28.588 [2024-05-15 02:39:15.983507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.588 [2024-05-15 02:39:15.983680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.588 [2024-05-15 02:39:15.983706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.588 [2024-05-15 02:39:15.983721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.588 [2024-05-15 02:39:15.983733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.588 [2024-05-15 02:39:15.983761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.588 qpair failed and we were unable to recover it. 
00:22:28.588 [2024-05-15 02:39:15.993568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.588 [2024-05-15 02:39:15.993750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.588 [2024-05-15 02:39:15.993777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.588 [2024-05-15 02:39:15.993792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.588 [2024-05-15 02:39:15.993804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.588 [2024-05-15 02:39:15.993833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.588 qpair failed and we were unable to recover it. 00:22:28.848 [2024-05-15 02:39:16.003550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.848 [2024-05-15 02:39:16.003735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.848 [2024-05-15 02:39:16.003768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.848 [2024-05-15 02:39:16.003787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.848 [2024-05-15 02:39:16.003799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.848 [2024-05-15 02:39:16.003829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.848 qpair failed and we were unable to recover it. 00:22:28.848 [2024-05-15 02:39:16.013580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.848 [2024-05-15 02:39:16.013751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.848 [2024-05-15 02:39:16.013778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.848 [2024-05-15 02:39:16.013794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.848 [2024-05-15 02:39:16.013812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.848 [2024-05-15 02:39:16.013842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.848 qpair failed and we were unable to recover it. 
00:22:28.848 [2024-05-15 02:39:16.023602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.848 [2024-05-15 02:39:16.023761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.848 [2024-05-15 02:39:16.023787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.023802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.023814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.023842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 00:22:28.849 [2024-05-15 02:39:16.033659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.033827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.033853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.033868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.033880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.033908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 00:22:28.849 [2024-05-15 02:39:16.043681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.043847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.043874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.043888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.043901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.043939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 
00:22:28.849 [2024-05-15 02:39:16.053684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.053858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.053884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.053899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.053912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.053946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 00:22:28.849 [2024-05-15 02:39:16.063709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.063901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.063927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.063950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.063963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.063992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 00:22:28.849 [2024-05-15 02:39:16.073809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.073986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.074012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.074027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.074039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.074070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 
00:22:28.849 [2024-05-15 02:39:16.083793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.083959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.083987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.084002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.084015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.084044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 00:22:28.849 [2024-05-15 02:39:16.093836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.094052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.094078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.094093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.094106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.094134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 00:22:28.849 [2024-05-15 02:39:16.103863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.104059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.104085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.104100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.104134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.104164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 
00:22:28.849 [2024-05-15 02:39:16.113886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.114090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.114117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.114132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.114144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.114171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 00:22:28.849 [2024-05-15 02:39:16.123893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.124112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.124137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.124152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.124164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.124192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 00:22:28.849 [2024-05-15 02:39:16.133940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.134106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.134132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.134148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.134160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.134188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 
00:22:28.849 [2024-05-15 02:39:16.144004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.849 [2024-05-15 02:39:16.144220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.849 [2024-05-15 02:39:16.144246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.849 [2024-05-15 02:39:16.144261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.849 [2024-05-15 02:39:16.144273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.849 [2024-05-15 02:39:16.144302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.849 qpair failed and we were unable to recover it. 00:22:28.850 [2024-05-15 02:39:16.153986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.154157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.154183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.154198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.154210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.850 [2024-05-15 02:39:16.154238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.850 qpair failed and we were unable to recover it. 00:22:28.850 [2024-05-15 02:39:16.164015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.164180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.164205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.164220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.164232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.850 [2024-05-15 02:39:16.164260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.850 qpair failed and we were unable to recover it. 
00:22:28.850 [2024-05-15 02:39:16.174078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.174242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.174269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.174283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.174295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.850 [2024-05-15 02:39:16.174323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.850 qpair failed and we were unable to recover it. 00:22:28.850 [2024-05-15 02:39:16.184059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.184248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.184273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.184288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.184300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.850 [2024-05-15 02:39:16.184328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.850 qpair failed and we were unable to recover it. 00:22:28.850 [2024-05-15 02:39:16.194136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.194311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.194337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.194357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.194370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.850 [2024-05-15 02:39:16.194398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.850 qpair failed and we were unable to recover it. 
00:22:28.850 [2024-05-15 02:39:16.204128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.204299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.204324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.204339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.204351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.850 [2024-05-15 02:39:16.204379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.850 qpair failed and we were unable to recover it. 00:22:28.850 [2024-05-15 02:39:16.214143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.214309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.214334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.214349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.214361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.850 [2024-05-15 02:39:16.214389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.850 qpair failed and we were unable to recover it. 00:22:28.850 [2024-05-15 02:39:16.224168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.224332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.224357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.224371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.224384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.850 [2024-05-15 02:39:16.224412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.850 qpair failed and we were unable to recover it. 
00:22:28.850 [2024-05-15 02:39:16.234207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.234378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.234403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.234418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.234430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:28.850 [2024-05-15 02:39:16.234458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.850 qpair failed and we were unable to recover it. 00:22:28.850 [2024-05-15 02:39:16.244320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.244546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.244579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.244597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.244612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:28.850 [2024-05-15 02:39:16.244644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:28.850 qpair failed and we were unable to recover it. 00:22:28.850 [2024-05-15 02:39:16.254287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.850 [2024-05-15 02:39:16.254451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.850 [2024-05-15 02:39:16.254478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.850 [2024-05-15 02:39:16.254494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.850 [2024-05-15 02:39:16.254506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:28.850 [2024-05-15 02:39:16.254536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:28.850 qpair failed and we were unable to recover it. 
00:22:29.110 [2024-05-15 02:39:16.264335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.264522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.264549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.264564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.264577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.264607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 00:22:29.110 [2024-05-15 02:39:16.274362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.274556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.274584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.274599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.274611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.274641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 00:22:29.110 [2024-05-15 02:39:16.284445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.284620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.284647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.284668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.284681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.284711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 
00:22:29.110 [2024-05-15 02:39:16.294497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.294660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.294686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.294701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.294714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.294744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 00:22:29.110 [2024-05-15 02:39:16.304414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.304575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.304601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.304616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.304629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.304658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 00:22:29.110 [2024-05-15 02:39:16.314458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.314630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.314655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.314670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.314682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.314712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 
00:22:29.110 [2024-05-15 02:39:16.324487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.324650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.324675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.324690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.324702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.324732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 00:22:29.110 [2024-05-15 02:39:16.334495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.334661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.334688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.334702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.334715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.334745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 00:22:29.110 [2024-05-15 02:39:16.344519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.344688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.344714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.344729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.344741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.344771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 
00:22:29.110 [2024-05-15 02:39:16.354563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.354737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.354764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.354779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.354791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.354820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 00:22:29.110 [2024-05-15 02:39:16.364595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.364765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.364791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.364806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.364819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.364849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.110 qpair failed and we were unable to recover it. 00:22:29.110 [2024-05-15 02:39:16.374648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.110 [2024-05-15 02:39:16.374810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.110 [2024-05-15 02:39:16.374841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.110 [2024-05-15 02:39:16.374857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.110 [2024-05-15 02:39:16.374869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.110 [2024-05-15 02:39:16.374899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 
00:22:29.111 [2024-05-15 02:39:16.384629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.384796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.384822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.384837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.384850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.384879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 00:22:29.111 [2024-05-15 02:39:16.394670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.394840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.394865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.394880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.394892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.394922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 00:22:29.111 [2024-05-15 02:39:16.404716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.404886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.404912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.404927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.404948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.404978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 
00:22:29.111 [2024-05-15 02:39:16.414714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.414960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.414986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.415001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.415013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.415048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 00:22:29.111 [2024-05-15 02:39:16.424771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.424980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.425006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.425021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.425034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.425063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 00:22:29.111 [2024-05-15 02:39:16.434840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.435068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.435098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.435113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.435126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.435157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 
00:22:29.111 [2024-05-15 02:39:16.444824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.445005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.445032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.445047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.445059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.445090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 00:22:29.111 [2024-05-15 02:39:16.454835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.455017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.455043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.455057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.455070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.455099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 00:22:29.111 [2024-05-15 02:39:16.464901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.465080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.465112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.465128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.465140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.465171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 
00:22:29.111 [2024-05-15 02:39:16.474917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.475118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.475145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.475160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.475172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.475202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 00:22:29.111 [2024-05-15 02:39:16.484952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.485115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.485141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.485155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.485168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.485197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 00:22:29.111 [2024-05-15 02:39:16.494959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.495163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.495189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.495204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.495216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.495245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 
00:22:29.111 [2024-05-15 02:39:16.504990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.505156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.111 [2024-05-15 02:39:16.505182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.111 [2024-05-15 02:39:16.505197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.111 [2024-05-15 02:39:16.505209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.111 [2024-05-15 02:39:16.505244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.111 qpair failed and we were unable to recover it. 00:22:29.111 [2024-05-15 02:39:16.515016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.111 [2024-05-15 02:39:16.515189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.112 [2024-05-15 02:39:16.515215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.112 [2024-05-15 02:39:16.515229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.112 [2024-05-15 02:39:16.515242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.112 [2024-05-15 02:39:16.515272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.112 qpair failed and we were unable to recover it. 00:22:29.370 [2024-05-15 02:39:16.525062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.370 [2024-05-15 02:39:16.525284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.370 [2024-05-15 02:39:16.525310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.370 [2024-05-15 02:39:16.525325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.370 [2024-05-15 02:39:16.525337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.370 [2024-05-15 02:39:16.525367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.370 qpair failed and we were unable to recover it. 
00:22:29.370 [2024-05-15 02:39:16.535080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.370 [2024-05-15 02:39:16.535292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.370 [2024-05-15 02:39:16.535318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.370 [2024-05-15 02:39:16.535333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.370 [2024-05-15 02:39:16.535345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.370 [2024-05-15 02:39:16.535375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.370 qpair failed and we were unable to recover it. 00:22:29.370 [2024-05-15 02:39:16.545148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.370 [2024-05-15 02:39:16.545331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.370 [2024-05-15 02:39:16.545360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.370 [2024-05-15 02:39:16.545378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.370 [2024-05-15 02:39:16.545390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.370 [2024-05-15 02:39:16.545421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.370 qpair failed and we were unable to recover it. 00:22:29.371 [2024-05-15 02:39:16.555143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.555327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.555354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.555369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.555381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.555411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 
00:22:29.371 [2024-05-15 02:39:16.565143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.565311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.565337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.565352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.565364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.565395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 00:22:29.371 [2024-05-15 02:39:16.575199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.575359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.575386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.575400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.575412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.575442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 00:22:29.371 [2024-05-15 02:39:16.585205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.585367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.585393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.585408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.585420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.585450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 
00:22:29.371 [2024-05-15 02:39:16.595270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.595483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.595509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.595524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.595542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.595573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 00:22:29.371 [2024-05-15 02:39:16.605272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.605474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.605502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.605520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.605533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.605563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 00:22:29.371 [2024-05-15 02:39:16.615419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.615603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.615630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.615645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.615657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.615687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 
00:22:29.371 [2024-05-15 02:39:16.625331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.625491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.625517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.625532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.625544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.625573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 00:22:29.371 [2024-05-15 02:39:16.635365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.635528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.635553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.635568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.635580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.635610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 00:22:29.371 [2024-05-15 02:39:16.645474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.645650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.645676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.645691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.645703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.645734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 
00:22:29.371 [2024-05-15 02:39:16.655698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.655868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.655892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.655906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.655919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.655958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 00:22:29.371 [2024-05-15 02:39:16.665447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.665610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.665636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.665651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.665663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.665693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 00:22:29.371 [2024-05-15 02:39:16.675505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.675710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.675736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.675751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.371 [2024-05-15 02:39:16.675763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.371 [2024-05-15 02:39:16.675793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.371 qpair failed and we were unable to recover it. 
00:22:29.371 [2024-05-15 02:39:16.685494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.371 [2024-05-15 02:39:16.685668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.371 [2024-05-15 02:39:16.685693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.371 [2024-05-15 02:39:16.685714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.685727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.372 [2024-05-15 02:39:16.685757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.372 qpair failed and we were unable to recover it. 00:22:29.372 [2024-05-15 02:39:16.695534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.372 [2024-05-15 02:39:16.695695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.372 [2024-05-15 02:39:16.695721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.372 [2024-05-15 02:39:16.695736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.695748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.372 [2024-05-15 02:39:16.695778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.372 qpair failed and we were unable to recover it. 00:22:29.372 [2024-05-15 02:39:16.705544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.372 [2024-05-15 02:39:16.705710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.372 [2024-05-15 02:39:16.705736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.372 [2024-05-15 02:39:16.705751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.705764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.372 [2024-05-15 02:39:16.705794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.372 qpair failed and we were unable to recover it. 
00:22:29.372 [2024-05-15 02:39:16.715636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.372 [2024-05-15 02:39:16.715836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.372 [2024-05-15 02:39:16.715862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.372 [2024-05-15 02:39:16.715877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.715889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.372 [2024-05-15 02:39:16.715937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.372 qpair failed and we were unable to recover it. 00:22:29.372 [2024-05-15 02:39:16.725687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.372 [2024-05-15 02:39:16.725849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.372 [2024-05-15 02:39:16.725876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.372 [2024-05-15 02:39:16.725891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.725903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.372 [2024-05-15 02:39:16.725939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.372 qpair failed and we were unable to recover it. 00:22:29.372 [2024-05-15 02:39:16.735645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.372 [2024-05-15 02:39:16.735807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.372 [2024-05-15 02:39:16.735834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.372 [2024-05-15 02:39:16.735849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.735861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:29.372 [2024-05-15 02:39:16.735904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.372 qpair failed and we were unable to recover it. 
00:22:29.372 [2024-05-15 02:39:16.745698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.372 [2024-05-15 02:39:16.745867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.372 [2024-05-15 02:39:16.745900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.372 [2024-05-15 02:39:16.745917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.745938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.372 [2024-05-15 02:39:16.745972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.372 qpair failed and we were unable to recover it. 00:22:29.372 [2024-05-15 02:39:16.755757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.372 [2024-05-15 02:39:16.755923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.372 [2024-05-15 02:39:16.755959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.372 [2024-05-15 02:39:16.755975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.755988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.372 [2024-05-15 02:39:16.756031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.372 qpair failed and we were unable to recover it. 00:22:29.372 [2024-05-15 02:39:16.765755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.372 [2024-05-15 02:39:16.765917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.372 [2024-05-15 02:39:16.765952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.372 [2024-05-15 02:39:16.765969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.765982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.372 [2024-05-15 02:39:16.766024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.372 qpair failed and we were unable to recover it. 
00:22:29.372 [2024-05-15 02:39:16.775787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.372 [2024-05-15 02:39:16.775972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.372 [2024-05-15 02:39:16.776005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.372 [2024-05-15 02:39:16.776021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.372 [2024-05-15 02:39:16.776034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.372 [2024-05-15 02:39:16.776064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.372 qpair failed and we were unable to recover it. 00:22:29.631 [2024-05-15 02:39:16.785807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.631 [2024-05-15 02:39:16.786012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.631 [2024-05-15 02:39:16.786039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.631 [2024-05-15 02:39:16.786054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.631 [2024-05-15 02:39:16.786066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.631 [2024-05-15 02:39:16.786097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.631 qpair failed and we were unable to recover it. 00:22:29.631 [2024-05-15 02:39:16.795863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.631 [2024-05-15 02:39:16.796079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.631 [2024-05-15 02:39:16.796106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.631 [2024-05-15 02:39:16.796121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.631 [2024-05-15 02:39:16.796133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.631 [2024-05-15 02:39:16.796163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.631 qpair failed and we were unable to recover it. 
00:22:29.631 [2024-05-15 02:39:16.805869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.631 [2024-05-15 02:39:16.806061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.631 [2024-05-15 02:39:16.806087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.631 [2024-05-15 02:39:16.806102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.631 [2024-05-15 02:39:16.806115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.631 [2024-05-15 02:39:16.806145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.631 qpair failed and we were unable to recover it. 00:22:29.631 [2024-05-15 02:39:16.815894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.631 [2024-05-15 02:39:16.816095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.631 [2024-05-15 02:39:16.816121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.631 [2024-05-15 02:39:16.816136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.631 [2024-05-15 02:39:16.816148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.631 [2024-05-15 02:39:16.816179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.631 qpair failed and we were unable to recover it. 00:22:29.631 [2024-05-15 02:39:16.825898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.631 [2024-05-15 02:39:16.826070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.631 [2024-05-15 02:39:16.826097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.631 [2024-05-15 02:39:16.826112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.631 [2024-05-15 02:39:16.826124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.631 [2024-05-15 02:39:16.826154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.631 qpair failed and we were unable to recover it. 
00:22:29.631 [2024-05-15 02:39:16.835967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.631 [2024-05-15 02:39:16.836142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.631 [2024-05-15 02:39:16.836168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.631 [2024-05-15 02:39:16.836183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.631 [2024-05-15 02:39:16.836196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.631 [2024-05-15 02:39:16.836225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.631 qpair failed and we were unable to recover it. 00:22:29.631 [2024-05-15 02:39:16.845959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.631 [2024-05-15 02:39:16.846168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.631 [2024-05-15 02:39:16.846195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.631 [2024-05-15 02:39:16.846209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.631 [2024-05-15 02:39:16.846222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.631 [2024-05-15 02:39:16.846251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.631 qpair failed and we were unable to recover it. 00:22:29.631 [2024-05-15 02:39:16.856028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.631 [2024-05-15 02:39:16.856194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.631 [2024-05-15 02:39:16.856223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.631 [2024-05-15 02:39:16.856241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.631 [2024-05-15 02:39:16.856254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.631 [2024-05-15 02:39:16.856285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.631 qpair failed and we were unable to recover it. 
00:22:29.631 [2024-05-15 02:39:16.866006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.631 [2024-05-15 02:39:16.866168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.631 [2024-05-15 02:39:16.866200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.631 [2024-05-15 02:39:16.866217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.631 [2024-05-15 02:39:16.866229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.631 [2024-05-15 02:39:16.866259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.631 qpair failed and we were unable to recover it. 00:22:29.631 [2024-05-15 02:39:16.876045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.876212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.876237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.876252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.876264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.876294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 00:22:29.632 [2024-05-15 02:39:16.886072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.886250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.886276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.886291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.886304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.886333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 
00:22:29.632 [2024-05-15 02:39:16.896097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.896274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.896299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.896314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.896327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.896357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 00:22:29.632 [2024-05-15 02:39:16.906136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.906316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.906343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.906358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.906370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.906406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 00:22:29.632 [2024-05-15 02:39:16.916169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.916344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.916371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.916386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.916401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.916431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 
00:22:29.632 [2024-05-15 02:39:16.926211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.926382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.926408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.926423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.926436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.926466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 00:22:29.632 [2024-05-15 02:39:16.936219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.936389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.936415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.936430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.936442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.936472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 00:22:29.632 [2024-05-15 02:39:16.946225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.946390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.946417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.946432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.946444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.946474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 
00:22:29.632 [2024-05-15 02:39:16.956374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.956579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.956611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.956628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.956640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.956682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 00:22:29.632 [2024-05-15 02:39:16.966305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.966468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.966494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.966509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.966522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.966551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 00:22:29.632 [2024-05-15 02:39:16.976336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.976505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.976531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.976546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.976558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.976600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 
00:22:29.632 [2024-05-15 02:39:16.986419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.986594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.986621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.986635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.986648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.986677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 00:22:29.632 [2024-05-15 02:39:16.996442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:16.996611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:16.996636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:16.996651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:16.996669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:16.996699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 00:22:29.632 [2024-05-15 02:39:17.006462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.632 [2024-05-15 02:39:17.006647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.632 [2024-05-15 02:39:17.006673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.632 [2024-05-15 02:39:17.006687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.632 [2024-05-15 02:39:17.006700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.632 [2024-05-15 02:39:17.006729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.632 qpair failed and we were unable to recover it. 
00:22:29.633 [2024-05-15 02:39:17.016497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.633 [2024-05-15 02:39:17.016659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.633 [2024-05-15 02:39:17.016685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.633 [2024-05-15 02:39:17.016701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.633 [2024-05-15 02:39:17.016713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.633 [2024-05-15 02:39:17.016742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.633 qpair failed and we were unable to recover it. 00:22:29.633 [2024-05-15 02:39:17.026494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.633 [2024-05-15 02:39:17.026703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.633 [2024-05-15 02:39:17.026729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.633 [2024-05-15 02:39:17.026743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.633 [2024-05-15 02:39:17.026755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.633 [2024-05-15 02:39:17.026785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.633 qpair failed and we were unable to recover it. 00:22:29.633 [2024-05-15 02:39:17.036516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.633 [2024-05-15 02:39:17.036680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.633 [2024-05-15 02:39:17.036706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.633 [2024-05-15 02:39:17.036721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.633 [2024-05-15 02:39:17.036734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.633 [2024-05-15 02:39:17.036775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.633 qpair failed and we were unable to recover it. 
00:22:29.893 [2024-05-15 02:39:17.046532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.893 [2024-05-15 02:39:17.046700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.893 [2024-05-15 02:39:17.046726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.893 [2024-05-15 02:39:17.046741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.893 [2024-05-15 02:39:17.046754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.046783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.056579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.056773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.056799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.056814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.056826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.056855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.066602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.066773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.066800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.066819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.066832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.066862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 
00:22:29.894 [2024-05-15 02:39:17.076666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.076842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.076869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.076884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.076897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.076927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.086709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.086873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.086900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.086921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.086941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.086973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.096738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.096902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.096928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.096953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.096965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.096996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 
00:22:29.894 [2024-05-15 02:39:17.106709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.106868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.106894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.106909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.106921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.106961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.116760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.116925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.116958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.116974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.116986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.117016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.126801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.126988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.127014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.127029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.127041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.127070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 
00:22:29.894 [2024-05-15 02:39:17.136795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.136987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.137013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.137027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.137040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.137070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.146840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.147008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.147035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.147050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.147062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.147092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.156858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.157031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.157057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.157071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.157084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.157114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 
00:22:29.894 [2024-05-15 02:39:17.166886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.167055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.167082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.167097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.167109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.167139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.176928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.177103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.177128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.894 [2024-05-15 02:39:17.177149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.894 [2024-05-15 02:39:17.177162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.894 [2024-05-15 02:39:17.177192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.894 qpair failed and we were unable to recover it. 00:22:29.894 [2024-05-15 02:39:17.187003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.894 [2024-05-15 02:39:17.187210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.894 [2024-05-15 02:39:17.187238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.187253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.187266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.187296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 
00:22:29.895 [2024-05-15 02:39:17.196996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.197193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.197225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.197239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.197251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.197281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 00:22:29.895 [2024-05-15 02:39:17.206994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.207157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.207183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.207198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.207210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.207240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 00:22:29.895 [2024-05-15 02:39:17.217042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.217206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.217232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.217247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.217259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.217288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 
00:22:29.895 [2024-05-15 02:39:17.227090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.227262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.227288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.227303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.227315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.227345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 00:22:29.895 [2024-05-15 02:39:17.237091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.237257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.237284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.237299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.237311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.237341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 00:22:29.895 [2024-05-15 02:39:17.247111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.247282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.247308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.247322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.247335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.247365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 
00:22:29.895 [2024-05-15 02:39:17.257167] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.257370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.257396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.257411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.257423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.257452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 00:22:29.895 [2024-05-15 02:39:17.267217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.267389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.267421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.267436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.267449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.267478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 00:22:29.895 [2024-05-15 02:39:17.277211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.277377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.277402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.277417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.277430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.277458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 
00:22:29.895 [2024-05-15 02:39:17.287288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.287500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.287528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.287543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.287559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.287590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 00:22:29.895 [2024-05-15 02:39:17.297270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.895 [2024-05-15 02:39:17.297429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.895 [2024-05-15 02:39:17.297457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.895 [2024-05-15 02:39:17.297471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.895 [2024-05-15 02:39:17.297484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:29.895 [2024-05-15 02:39:17.297513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.895 qpair failed and we were unable to recover it. 00:22:30.156 [2024-05-15 02:39:17.307315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.307485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.307511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.307526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.307539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.307578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 
00:22:30.156 [2024-05-15 02:39:17.317321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.317490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.317517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.317532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.317544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.317574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 00:22:30.156 [2024-05-15 02:39:17.327343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.327508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.327534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.327550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.327562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.327592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 00:22:30.156 [2024-05-15 02:39:17.337366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.337524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.337550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.337565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.337577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.337606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 
00:22:30.156 [2024-05-15 02:39:17.347400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.347565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.347591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.347606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.347618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.347648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 00:22:30.156 [2024-05-15 02:39:17.357464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.357674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.357706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.357722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.357734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.357764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 00:22:30.156 [2024-05-15 02:39:17.367486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.367656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.367683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.367698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.367713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.367744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 
00:22:30.156 [2024-05-15 02:39:17.377512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.377676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.377702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.377717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.377729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.377759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 00:22:30.156 [2024-05-15 02:39:17.387527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.387689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.387715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.387730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.387743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.387773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 00:22:30.156 [2024-05-15 02:39:17.397598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.397784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.397811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.397826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.397844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.397874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 
00:22:30.156 [2024-05-15 02:39:17.407577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.156 [2024-05-15 02:39:17.407741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.156 [2024-05-15 02:39:17.407767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.156 [2024-05-15 02:39:17.407782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.156 [2024-05-15 02:39:17.407795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.156 [2024-05-15 02:39:17.407824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.156 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.417607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.417778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.417804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.417819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.417831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.417861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.427660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.427828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.427854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.427869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.427881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.427911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 
00:22:30.157 [2024-05-15 02:39:17.437704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.437885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.437910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.437925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.437949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.437980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.447736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.447919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.447955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.447971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.447983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.448012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.457722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.457887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.457913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.457928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.457949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.457992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 
00:22:30.157 [2024-05-15 02:39:17.467741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.467903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.467934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.467951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.467963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.467993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.477787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.477960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.477986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.478002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.478014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.478044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.487791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.487958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.487985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.488005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.488018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.488047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 
00:22:30.157 [2024-05-15 02:39:17.497850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.498021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.498047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.498062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.498074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.498104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.507858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.508026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.508053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.508068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.508080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.508109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.517898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.518078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.518106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.518123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.518137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.518168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 
00:22:30.157 [2024-05-15 02:39:17.527903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.528087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.528113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.528128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.528141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.528170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.537974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.538142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.538170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.538188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.157 [2024-05-15 02:39:17.538201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.157 [2024-05-15 02:39:17.538232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.157 qpair failed and we were unable to recover it. 00:22:30.157 [2024-05-15 02:39:17.547999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.157 [2024-05-15 02:39:17.548204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.157 [2024-05-15 02:39:17.548231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.157 [2024-05-15 02:39:17.548247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.158 [2024-05-15 02:39:17.548259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.158 [2024-05-15 02:39:17.548289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.158 qpair failed and we were unable to recover it. 
00:22:30.158 [2024-05-15 02:39:17.558073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.158 [2024-05-15 02:39:17.558260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.158 [2024-05-15 02:39:17.558286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.158 [2024-05-15 02:39:17.558300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.158 [2024-05-15 02:39:17.558312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.158 [2024-05-15 02:39:17.558342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.158 qpair failed and we were unable to recover it. 00:22:30.158 [2024-05-15 02:39:17.568032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.158 [2024-05-15 02:39:17.568198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.158 [2024-05-15 02:39:17.568224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.158 [2024-05-15 02:39:17.568239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.158 [2024-05-15 02:39:17.568251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.158 [2024-05-15 02:39:17.568281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.158 qpair failed and we were unable to recover it. 00:22:30.418 [2024-05-15 02:39:17.578048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.418 [2024-05-15 02:39:17.578215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.418 [2024-05-15 02:39:17.578242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.418 [2024-05-15 02:39:17.578263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.418 [2024-05-15 02:39:17.578276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.418 [2024-05-15 02:39:17.578306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.418 qpair failed and we were unable to recover it. 
00:22:30.418 [2024-05-15 02:39:17.588085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.418 [2024-05-15 02:39:17.588244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.588271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.588286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.588298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.588327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.598171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.598341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.598366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.598381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.598394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.598423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.608146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.608316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.608342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.608357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.608370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.608399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 
00:22:30.419 [2024-05-15 02:39:17.618170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.618336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.618362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.618377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.618389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.618418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.628202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.628371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.628397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.628412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.628424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.628453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.638253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.638420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.638447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.638461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.638474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.638516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 
00:22:30.419 [2024-05-15 02:39:17.648307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.648523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.648549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.648565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.648577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.648607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.658331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.658544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.658569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.658583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.658595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.658625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.668320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.668483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.668513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.668529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.668541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.668571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 
00:22:30.419 [2024-05-15 02:39:17.678385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.678552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.678578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.678593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.678605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.678634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.688398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.688557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.688584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.688599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.688611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.688641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.698445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.698640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.698666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.698681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.698693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.698735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 
00:22:30.419 [2024-05-15 02:39:17.708460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.708626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.708653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.708668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.708681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.708717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.718501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.718675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.718701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.718716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.718728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.718757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.728497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.728661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.728688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.728702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.728715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.728744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 
00:22:30.419 [2024-05-15 02:39:17.738515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.738681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.738707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.738722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.738734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.738763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.748561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.748726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.748752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.748767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.748780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.748809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.758590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.758757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.758789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.758804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.758817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.758846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 
00:22:30.419 [2024-05-15 02:39:17.768617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.768819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.768846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.768860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.768873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.768902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.778641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.778822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.778848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.778863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.778875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.778904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.788659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.788812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.788838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.788853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.788865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.788894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 
00:22:30.419 [2024-05-15 02:39:17.798705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.419 [2024-05-15 02:39:17.798874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.419 [2024-05-15 02:39:17.798900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.419 [2024-05-15 02:39:17.798914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.419 [2024-05-15 02:39:17.798940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.419 [2024-05-15 02:39:17.798972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.419 qpair failed and we were unable to recover it. 00:22:30.419 [2024-05-15 02:39:17.808754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.420 [2024-05-15 02:39:17.808926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.420 [2024-05-15 02:39:17.808959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.420 [2024-05-15 02:39:17.808975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.420 [2024-05-15 02:39:17.808987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.420 [2024-05-15 02:39:17.809018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.420 qpair failed and we were unable to recover it. 00:22:30.420 [2024-05-15 02:39:17.818795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.420 [2024-05-15 02:39:17.818969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.420 [2024-05-15 02:39:17.818995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.420 [2024-05-15 02:39:17.819011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.420 [2024-05-15 02:39:17.819023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.420 [2024-05-15 02:39:17.819054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.420 qpair failed and we were unable to recover it. 
00:22:30.420 [2024-05-15 02:39:17.828799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.420 [2024-05-15 02:39:17.828987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.420 [2024-05-15 02:39:17.829013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.420 [2024-05-15 02:39:17.829027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.420 [2024-05-15 02:39:17.829040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.420 [2024-05-15 02:39:17.829070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.420 qpair failed and we were unable to recover it. 00:22:30.681 [2024-05-15 02:39:17.838802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.681 [2024-05-15 02:39:17.838981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.681 [2024-05-15 02:39:17.839007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.681 [2024-05-15 02:39:17.839022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.681 [2024-05-15 02:39:17.839034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.681 [2024-05-15 02:39:17.839064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.681 qpair failed and we were unable to recover it. 00:22:30.681 [2024-05-15 02:39:17.848857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.681 [2024-05-15 02:39:17.849063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.681 [2024-05-15 02:39:17.849090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.681 [2024-05-15 02:39:17.849105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.681 [2024-05-15 02:39:17.849117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.681 [2024-05-15 02:39:17.849159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.681 qpair failed and we were unable to recover it. 
00:22:30.681 [2024-05-15 02:39:17.858864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.681 [2024-05-15 02:39:17.859032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.681 [2024-05-15 02:39:17.859056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.681 [2024-05-15 02:39:17.859070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.681 [2024-05-15 02:39:17.859083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.681 [2024-05-15 02:39:17.859113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.681 qpair failed and we were unable to recover it. 00:22:30.681 [2024-05-15 02:39:17.868884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.681 [2024-05-15 02:39:17.869055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.681 [2024-05-15 02:39:17.869081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.681 [2024-05-15 02:39:17.869096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.681 [2024-05-15 02:39:17.869108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.681 [2024-05-15 02:39:17.869138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.681 qpair failed and we were unable to recover it. 00:22:30.681 [2024-05-15 02:39:17.878922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.681 [2024-05-15 02:39:17.879097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.681 [2024-05-15 02:39:17.879122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.681 [2024-05-15 02:39:17.879138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.681 [2024-05-15 02:39:17.879150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.681 [2024-05-15 02:39:17.879179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.681 qpair failed and we were unable to recover it. 
00:22:30.681 [2024-05-15 02:39:17.889019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.681 [2024-05-15 02:39:17.889186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.681 [2024-05-15 02:39:17.889212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.681 [2024-05-15 02:39:17.889227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.681 [2024-05-15 02:39:17.889245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.681 [2024-05-15 02:39:17.889276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.681 qpair failed and we were unable to recover it. 00:22:30.681 [2024-05-15 02:39:17.898986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.681 [2024-05-15 02:39:17.899148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.681 [2024-05-15 02:39:17.899175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.681 [2024-05-15 02:39:17.899190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.681 [2024-05-15 02:39:17.899203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.681 [2024-05-15 02:39:17.899246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.681 qpair failed and we were unable to recover it. 00:22:30.681 [2024-05-15 02:39:17.909087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.909251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.909278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.909292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.909305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.909334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 
00:22:30.682 [2024-05-15 02:39:17.919062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.919238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.919265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.919284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.919298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.919329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 00:22:30.682 [2024-05-15 02:39:17.929083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.929257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.929284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.929300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.929315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.929346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 00:22:30.682 [2024-05-15 02:39:17.939084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.939245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.939272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.939286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.939299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.939328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 
00:22:30.682 [2024-05-15 02:39:17.949202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.949363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.949389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.949403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.949416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.949445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 00:22:30.682 [2024-05-15 02:39:17.959154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.959336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.959362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.959376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.959389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.959419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 00:22:30.682 [2024-05-15 02:39:17.969260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.969469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.969495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.969510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.969522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.969564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 
00:22:30.682 [2024-05-15 02:39:17.979251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.979445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.979472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.979492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.979506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.979535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 00:22:30.682 [2024-05-15 02:39:17.989318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.989509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.989536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.989551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.989563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.989593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 00:22:30.682 [2024-05-15 02:39:17.999315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:17.999494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:17.999520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:17.999535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:17.999547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:17.999591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 
00:22:30.682 [2024-05-15 02:39:18.009316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:18.009487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:18.009513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:18.009528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:18.009541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:18.009571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 00:22:30.682 [2024-05-15 02:39:18.019311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:18.019474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:18.019501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:18.019516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:18.019528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:18.019557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 00:22:30.682 [2024-05-15 02:39:18.029400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:18.029566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:18.029593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:18.029609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:18.029621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:18.029651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 
00:22:30.682 [2024-05-15 02:39:18.039441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.682 [2024-05-15 02:39:18.039632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.682 [2024-05-15 02:39:18.039658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.682 [2024-05-15 02:39:18.039673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.682 [2024-05-15 02:39:18.039685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.682 [2024-05-15 02:39:18.039715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.682 qpair failed and we were unable to recover it. 00:22:30.683 [2024-05-15 02:39:18.049483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.683 [2024-05-15 02:39:18.049654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.683 [2024-05-15 02:39:18.049679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.683 [2024-05-15 02:39:18.049694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.683 [2024-05-15 02:39:18.049707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.683 [2024-05-15 02:39:18.049736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.683 qpair failed and we were unable to recover it. 00:22:30.683 [2024-05-15 02:39:18.059409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.683 [2024-05-15 02:39:18.059568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.683 [2024-05-15 02:39:18.059594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.683 [2024-05-15 02:39:18.059609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.683 [2024-05-15 02:39:18.059622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.683 [2024-05-15 02:39:18.059652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.683 qpair failed and we were unable to recover it. 
00:22:30.683 [2024-05-15 02:39:18.069471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.683 [2024-05-15 02:39:18.069640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.683 [2024-05-15 02:39:18.069674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.683 [2024-05-15 02:39:18.069690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.683 [2024-05-15 02:39:18.069703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.683 [2024-05-15 02:39:18.069733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.683 qpair failed and we were unable to recover it. 00:22:30.683 [2024-05-15 02:39:18.079479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.683 [2024-05-15 02:39:18.079649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.683 [2024-05-15 02:39:18.079674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.683 [2024-05-15 02:39:18.079689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.683 [2024-05-15 02:39:18.079702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.683 [2024-05-15 02:39:18.079732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.683 qpair failed and we were unable to recover it. 00:22:30.683 [2024-05-15 02:39:18.089531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.683 [2024-05-15 02:39:18.089735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.683 [2024-05-15 02:39:18.089761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.683 [2024-05-15 02:39:18.089776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.683 [2024-05-15 02:39:18.089788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.683 [2024-05-15 02:39:18.089818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.683 qpair failed and we were unable to recover it. 
00:22:30.943 [2024-05-15 02:39:18.099565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.099737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.099763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.099778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.099790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.099820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 00:22:30.943 [2024-05-15 02:39:18.109587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.109746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.109773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.109788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.109801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.109836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 00:22:30.943 [2024-05-15 02:39:18.119640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.119808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.119833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.119847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.119860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.119890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 
00:22:30.943 [2024-05-15 02:39:18.129639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.129816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.129843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.129858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.129870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.129899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 00:22:30.943 [2024-05-15 02:39:18.139666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.139832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.139859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.139877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.139889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.139919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 00:22:30.943 [2024-05-15 02:39:18.149681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.149842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.149868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.149883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.149896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.149926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 
00:22:30.943 [2024-05-15 02:39:18.159713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.159898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.159936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.159954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.159967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.159998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 00:22:30.943 [2024-05-15 02:39:18.169820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.169999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.170026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.170041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.170053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.170083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 00:22:30.943 [2024-05-15 02:39:18.179801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.179971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.179998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.180013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.180026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.180056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 
00:22:30.943 [2024-05-15 02:39:18.189786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.189952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.189978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.189993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.190005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.190035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 00:22:30.943 [2024-05-15 02:39:18.199836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.200008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.200035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.200049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.200062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.200097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 00:22:30.943 [2024-05-15 02:39:18.209887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.210095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.210123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.210139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.210152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.943 [2024-05-15 02:39:18.210182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.943 qpair failed and we were unable to recover it. 
00:22:30.943 [2024-05-15 02:39:18.219890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.943 [2024-05-15 02:39:18.220070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.943 [2024-05-15 02:39:18.220097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.943 [2024-05-15 02:39:18.220112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.943 [2024-05-15 02:39:18.220124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.944 [2024-05-15 02:39:18.220154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.944 qpair failed and we were unable to recover it. 00:22:30.944 [2024-05-15 02:39:18.229917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.230086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.230112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.230128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.230140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.944 [2024-05-15 02:39:18.230170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.944 qpair failed and we were unable to recover it. 00:22:30.944 [2024-05-15 02:39:18.239963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.240172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.240198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.240213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.240225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.944 [2024-05-15 02:39:18.240255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.944 qpair failed and we were unable to recover it. 
00:22:30.944 [2024-05-15 02:39:18.250010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.250234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.250260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.250275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.250288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.944 [2024-05-15 02:39:18.250318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.944 qpair failed and we were unable to recover it. 00:22:30.944 [2024-05-15 02:39:18.260009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.260169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.260195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.260210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.260222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b48000b90 00:22:30.944 [2024-05-15 02:39:18.260253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:30.944 qpair failed and we were unable to recover it. 00:22:30.944 [2024-05-15 02:39:18.270023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.270188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.270219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.270235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.270248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:30.944 [2024-05-15 02:39:18.270279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.944 qpair failed and we were unable to recover it. 
00:22:30.944 [2024-05-15 02:39:18.280066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.280250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.280277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.280292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.280304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:30.944 [2024-05-15 02:39:18.280335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.944 qpair failed and we were unable to recover it. 00:22:30.944 [2024-05-15 02:39:18.290074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.290249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.290277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.290292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.290322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:30.944 [2024-05-15 02:39:18.290353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.944 qpair failed and we were unable to recover it. 00:22:30.944 [2024-05-15 02:39:18.300106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.300270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.300297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.300312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.300324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:30.944 [2024-05-15 02:39:18.300354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.944 qpair failed and we were unable to recover it. 
00:22:30.944 [2024-05-15 02:39:18.310141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.310311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.310338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.310353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.310365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:30.944 [2024-05-15 02:39:18.310395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.944 qpair failed and we were unable to recover it. 00:22:30.944 [2024-05-15 02:39:18.320197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.320385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.320411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.320429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.320442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:30.944 [2024-05-15 02:39:18.320472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.944 qpair failed and we were unable to recover it. 00:22:30.944 [2024-05-15 02:39:18.330250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.330419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.330445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.330460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.330472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:30.944 [2024-05-15 02:39:18.330503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.944 qpair failed and we were unable to recover it. 
00:22:30.944 [2024-05-15 02:39:18.340207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.340370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.340396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.340410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.340422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:30.944 [2024-05-15 02:39:18.340452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.944 qpair failed and we were unable to recover it. 00:22:30.944 [2024-05-15 02:39:18.350254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.944 [2024-05-15 02:39:18.350413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.944 [2024-05-15 02:39:18.350440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.944 [2024-05-15 02:39:18.350455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.944 [2024-05-15 02:39:18.350468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:30.944 [2024-05-15 02:39:18.350498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.944 qpair failed and we were unable to recover it. 00:22:31.204 [2024-05-15 02:39:18.360289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.360469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.204 [2024-05-15 02:39:18.360495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.204 [2024-05-15 02:39:18.360510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.204 [2024-05-15 02:39:18.360532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.204 [2024-05-15 02:39:18.360562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.204 qpair failed and we were unable to recover it. 
00:22:31.204 [2024-05-15 02:39:18.370358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.370549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.204 [2024-05-15 02:39:18.370575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.204 [2024-05-15 02:39:18.370590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.204 [2024-05-15 02:39:18.370602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.204 [2024-05-15 02:39:18.370632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.204 qpair failed and we were unable to recover it. 00:22:31.204 [2024-05-15 02:39:18.380445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.380611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.204 [2024-05-15 02:39:18.380637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.204 [2024-05-15 02:39:18.380657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.204 [2024-05-15 02:39:18.380670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.204 [2024-05-15 02:39:18.380699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.204 qpair failed and we were unable to recover it. 00:22:31.204 [2024-05-15 02:39:18.390417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.390617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.204 [2024-05-15 02:39:18.390643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.204 [2024-05-15 02:39:18.390657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.204 [2024-05-15 02:39:18.390670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.204 [2024-05-15 02:39:18.390700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.204 qpair failed and we were unable to recover it. 
00:22:31.204 [2024-05-15 02:39:18.400437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.400629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.204 [2024-05-15 02:39:18.400655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.204 [2024-05-15 02:39:18.400669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.204 [2024-05-15 02:39:18.400682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.204 [2024-05-15 02:39:18.400711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.204 qpair failed and we were unable to recover it. 00:22:31.204 [2024-05-15 02:39:18.410423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.410589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.204 [2024-05-15 02:39:18.410614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.204 [2024-05-15 02:39:18.410629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.204 [2024-05-15 02:39:18.410641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.204 [2024-05-15 02:39:18.410671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.204 qpair failed and we were unable to recover it. 00:22:31.204 [2024-05-15 02:39:18.420474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.420645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.204 [2024-05-15 02:39:18.420671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.204 [2024-05-15 02:39:18.420686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.204 [2024-05-15 02:39:18.420698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.204 [2024-05-15 02:39:18.420728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.204 qpair failed and we were unable to recover it. 
00:22:31.204 [2024-05-15 02:39:18.430487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.430676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.204 [2024-05-15 02:39:18.430702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.204 [2024-05-15 02:39:18.430717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.204 [2024-05-15 02:39:18.430729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.204 [2024-05-15 02:39:18.430758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.204 qpair failed and we were unable to recover it. 00:22:31.204 [2024-05-15 02:39:18.440543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.440707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.204 [2024-05-15 02:39:18.440733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.204 [2024-05-15 02:39:18.440748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.204 [2024-05-15 02:39:18.440760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.204 [2024-05-15 02:39:18.440802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.204 qpair failed and we were unable to recover it. 00:22:31.204 [2024-05-15 02:39:18.450559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.204 [2024-05-15 02:39:18.450726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.450752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.450767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.450779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.450821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 
00:22:31.205 [2024-05-15 02:39:18.460570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.460738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.460765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.460779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.460792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.460821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 00:22:31.205 [2024-05-15 02:39:18.470613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.470776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.470807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.470822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.470835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.470864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 00:22:31.205 [2024-05-15 02:39:18.480655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.480841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.480867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.480882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.480895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.480924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 
00:22:31.205 [2024-05-15 02:39:18.490682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.490855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.490881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.490895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.490907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.490945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 00:22:31.205 [2024-05-15 02:39:18.500675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.500837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.500863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.500878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.500890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.500920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 00:22:31.205 [2024-05-15 02:39:18.510712] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.510891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.510918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.510940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.510954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.510983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 
00:22:31.205 [2024-05-15 02:39:18.520753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.520916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.520950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.520966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.520979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.521009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 00:22:31.205 [2024-05-15 02:39:18.530761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.530919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.530952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.530968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.530980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.531021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 00:22:31.205 [2024-05-15 02:39:18.540795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.540965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.540991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.541006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.541018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.541047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 
00:22:31.205 [2024-05-15 02:39:18.550845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.551017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.551043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.551058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.551070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.551100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 00:22:31.205 [2024-05-15 02:39:18.560912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.561127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.561159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.561174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.561186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.561216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 00:22:31.205 [2024-05-15 02:39:18.570909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.571115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.571142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.571157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.571169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.571212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 
00:22:31.205 [2024-05-15 02:39:18.580936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.205 [2024-05-15 02:39:18.581105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.205 [2024-05-15 02:39:18.581131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.205 [2024-05-15 02:39:18.581145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.205 [2024-05-15 02:39:18.581158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.205 [2024-05-15 02:39:18.581187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.205 qpair failed and we were unable to recover it. 00:22:31.205 [2024-05-15 02:39:18.590924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.206 [2024-05-15 02:39:18.591095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.206 [2024-05-15 02:39:18.591121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.206 [2024-05-15 02:39:18.591135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.206 [2024-05-15 02:39:18.591148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.206 [2024-05-15 02:39:18.591178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.206 qpair failed and we were unable to recover it. 00:22:31.206 [2024-05-15 02:39:18.600983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.206 [2024-05-15 02:39:18.601182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.206 [2024-05-15 02:39:18.601208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.206 [2024-05-15 02:39:18.601223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.206 [2024-05-15 02:39:18.601235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.206 [2024-05-15 02:39:18.601270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.206 qpair failed and we were unable to recover it. 
00:22:31.206 [2024-05-15 02:39:18.611041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.206 [2024-05-15 02:39:18.611215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.206 [2024-05-15 02:39:18.611240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.206 [2024-05-15 02:39:18.611255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.206 [2024-05-15 02:39:18.611267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.206 [2024-05-15 02:39:18.611297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.206 qpair failed and we were unable to recover it. 00:22:31.465 [2024-05-15 02:39:18.621016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.465 [2024-05-15 02:39:18.621177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.465 [2024-05-15 02:39:18.621202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.465 [2024-05-15 02:39:18.621221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.465 [2024-05-15 02:39:18.621234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.465 [2024-05-15 02:39:18.621263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.465 qpair failed and we were unable to recover it. 00:22:31.465 [2024-05-15 02:39:18.631073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.465 [2024-05-15 02:39:18.631256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.465 [2024-05-15 02:39:18.631281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.465 [2024-05-15 02:39:18.631296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.465 [2024-05-15 02:39:18.631309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.465 [2024-05-15 02:39:18.631338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.465 qpair failed and we were unable to recover it. 
00:22:31.465 [2024-05-15 02:39:18.641095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.465 [2024-05-15 02:39:18.641274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.465 [2024-05-15 02:39:18.641299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.465 [2024-05-15 02:39:18.641314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.465 [2024-05-15 02:39:18.641326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.465 [2024-05-15 02:39:18.641355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.465 qpair failed and we were unable to recover it. 00:22:31.465 [2024-05-15 02:39:18.651105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.465 [2024-05-15 02:39:18.651267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.651331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.651347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.651360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.651403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 00:22:31.466 [2024-05-15 02:39:18.661129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.661294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.661318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.661332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.661344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.661374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 
00:22:31.466 [2024-05-15 02:39:18.671156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.671322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.671348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.671363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.671376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.671406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 00:22:31.466 [2024-05-15 02:39:18.681189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.681359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.681385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.681399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.681412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.681442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 00:22:31.466 [2024-05-15 02:39:18.691282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.691487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.691513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.691529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.691547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.691578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 
00:22:31.466 [2024-05-15 02:39:18.701291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.701454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.701481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.701496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.701509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.701539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 00:22:31.466 [2024-05-15 02:39:18.711329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.711495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.711520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.711536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.711548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.711577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 00:22:31.466 [2024-05-15 02:39:18.721352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.721527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.721553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.721568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.721580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.721610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 
00:22:31.466 [2024-05-15 02:39:18.731424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.731599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.731625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.731640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.731652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.731682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 00:22:31.466 [2024-05-15 02:39:18.741453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.741634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.741660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.741675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.741687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.741717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 00:22:31.466 [2024-05-15 02:39:18.751430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.751635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.751662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.751680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.751694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.751725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 
00:22:31.466 [2024-05-15 02:39:18.761425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.761611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.761637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.761651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.761663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.761694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 00:22:31.466 [2024-05-15 02:39:18.771539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.771697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.771723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.771738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.771750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.771780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 00:22:31.466 [2024-05-15 02:39:18.781542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.466 [2024-05-15 02:39:18.781724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.466 [2024-05-15 02:39:18.781750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.466 [2024-05-15 02:39:18.781770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.466 [2024-05-15 02:39:18.781784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.466 [2024-05-15 02:39:18.781813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.466 qpair failed and we were unable to recover it. 
00:22:31.466 [2024-05-15 02:39:18.791533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.467 [2024-05-15 02:39:18.791734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.467 [2024-05-15 02:39:18.791760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.467 [2024-05-15 02:39:18.791774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.467 [2024-05-15 02:39:18.791786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.467 [2024-05-15 02:39:18.791816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.467 qpair failed and we were unable to recover it. 00:22:31.467 [2024-05-15 02:39:18.801572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.467 [2024-05-15 02:39:18.801745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.467 [2024-05-15 02:39:18.801770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.467 [2024-05-15 02:39:18.801785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.467 [2024-05-15 02:39:18.801797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.467 [2024-05-15 02:39:18.801827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.467 qpair failed and we were unable to recover it. 00:22:31.467 [2024-05-15 02:39:18.811564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.467 [2024-05-15 02:39:18.811733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.467 [2024-05-15 02:39:18.811759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.467 [2024-05-15 02:39:18.811774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.467 [2024-05-15 02:39:18.811786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.467 [2024-05-15 02:39:18.811815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.467 qpair failed and we were unable to recover it. 
00:22:31.467 [2024-05-15 02:39:18.821605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.467 [2024-05-15 02:39:18.821797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.467 [2024-05-15 02:39:18.821822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.467 [2024-05-15 02:39:18.821837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.467 [2024-05-15 02:39:18.821849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.467 [2024-05-15 02:39:18.821879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.467 qpair failed and we were unable to recover it. 00:22:31.467 [2024-05-15 02:39:18.831618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.467 [2024-05-15 02:39:18.831775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.467 [2024-05-15 02:39:18.831801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.467 [2024-05-15 02:39:18.831815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.467 [2024-05-15 02:39:18.831827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.467 [2024-05-15 02:39:18.831856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.467 qpair failed and we were unable to recover it. 00:22:31.467 [2024-05-15 02:39:18.841672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.467 [2024-05-15 02:39:18.841882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.467 [2024-05-15 02:39:18.841907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.467 [2024-05-15 02:39:18.841922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.467 [2024-05-15 02:39:18.841942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.467 [2024-05-15 02:39:18.841973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.467 qpair failed and we were unable to recover it. 
00:22:31.467 [2024-05-15 02:39:18.851661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.467 [2024-05-15 02:39:18.851842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.467 [2024-05-15 02:39:18.851867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.467 [2024-05-15 02:39:18.851882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.467 [2024-05-15 02:39:18.851894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.467 [2024-05-15 02:39:18.851923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.467 qpair failed and we were unable to recover it. 00:22:31.467 [2024-05-15 02:39:18.861699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.467 [2024-05-15 02:39:18.861865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.467 [2024-05-15 02:39:18.861892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.467 [2024-05-15 02:39:18.861909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.467 [2024-05-15 02:39:18.861923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.467 [2024-05-15 02:39:18.861964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.467 qpair failed and we were unable to recover it. 00:22:31.467 [2024-05-15 02:39:18.871795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.467 [2024-05-15 02:39:18.871970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.467 [2024-05-15 02:39:18.871998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.467 [2024-05-15 02:39:18.872022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.467 [2024-05-15 02:39:18.872036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.467 [2024-05-15 02:39:18.872067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.467 qpair failed and we were unable to recover it. 
00:22:31.725 [2024-05-15 02:39:18.881804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.725 [2024-05-15 02:39:18.881987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.725 [2024-05-15 02:39:18.882014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.725 [2024-05-15 02:39:18.882029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.725 [2024-05-15 02:39:18.882041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.725 [2024-05-15 02:39:18.882071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.725 qpair failed and we were unable to recover it. 00:22:31.725 [2024-05-15 02:39:18.891831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.725 [2024-05-15 02:39:18.892006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.725 [2024-05-15 02:39:18.892033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.725 [2024-05-15 02:39:18.892047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.725 [2024-05-15 02:39:18.892059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.726 [2024-05-15 02:39:18.892089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.726 qpair failed and we were unable to recover it. 00:22:31.726 [2024-05-15 02:39:18.901812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.726 [2024-05-15 02:39:18.901978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.726 [2024-05-15 02:39:18.902004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.726 [2024-05-15 02:39:18.902019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.726 [2024-05-15 02:39:18.902032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.726 [2024-05-15 02:39:18.902062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.726 qpair failed and we were unable to recover it. 
00:22:31.726 [2024-05-15 02:39:18.911853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.726 [2024-05-15 02:39:18.912029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.726 [2024-05-15 02:39:18.912055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.726 [2024-05-15 02:39:18.912070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.726 [2024-05-15 02:39:18.912082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.726 [2024-05-15 02:39:18.912112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.726 qpair failed and we were unable to recover it. 00:22:31.726 [2024-05-15 02:39:18.921914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.726 [2024-05-15 02:39:18.922113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.726 [2024-05-15 02:39:18.922138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.726 [2024-05-15 02:39:18.922153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.726 [2024-05-15 02:39:18.922165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.726 [2024-05-15 02:39:18.922195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.726 qpair failed and we were unable to recover it. 00:22:31.726 [2024-05-15 02:39:18.931946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.726 [2024-05-15 02:39:18.932114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.726 [2024-05-15 02:39:18.932140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.726 [2024-05-15 02:39:18.932154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.726 [2024-05-15 02:39:18.932166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.726 [2024-05-15 02:39:18.932196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.726 qpair failed and we were unable to recover it. 
00:22:31.726 [2024-05-15 02:39:18.941956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.726 [2024-05-15 02:39:18.942170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.726 [2024-05-15 02:39:18.942195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.726 [2024-05-15 02:39:18.942209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.726 [2024-05-15 02:39:18.942221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.726 [2024-05-15 02:39:18.942251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.726 qpair failed and we were unable to recover it. 00:22:31.726 [2024-05-15 02:39:18.951970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.726 [2024-05-15 02:39:18.952137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.726 [2024-05-15 02:39:18.952162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.726 [2024-05-15 02:39:18.952177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.726 [2024-05-15 02:39:18.952189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2b50000b90 00:22:31.726 [2024-05-15 02:39:18.952219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.726 qpair failed and we were unable to recover it. 00:22:31.726 [2024-05-15 02:39:18.962101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.726 [2024-05-15 02:39:18.962272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.726 [2024-05-15 02:39:18.962313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.726 [2024-05-15 02:39:18.962330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.726 [2024-05-15 02:39:18.962343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:31.726 [2024-05-15 02:39:18.962372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:31.726 qpair failed and we were unable to recover it. 
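The retry burst ends in the records that follow: once the Keep Alive submission also fails, the host library declares the controller failed, resets it, and reconnects to nqn.2016-06.io.spdk:cnode1. In these disconnect tests the condition is typically provoked from the target side while host I/O is still in flight; a minimal sketch of that pattern, assuming the standard SPDK rpc.py listener helpers and the address/subsystem shown in the log (illustrative only, not commands captured from this run):
# drop the TCP listener out from under the connected host ...
$ scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ... let the host-side reconnect attempts fail for a while, then restore it
$ sleep 5
$ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Once the listener is back, the host's reset path can complete, which matches the "Controller properly reset." and "Initializing NVMe Controllers" sequence in the records below.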
00:22:31.726 [2024-05-15 02:39:18.972058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:31.726 [2024-05-15 02:39:18.972223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:31.726 [2024-05-15 02:39:18.972249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:31.726 [2024-05-15 02:39:18.972265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:31.726 [2024-05-15 02:39:18.972277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x224e420 00:22:31.726 [2024-05-15 02:39:18.972305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:31.726 qpair failed and we were unable to recover it. 00:22:31.726 [2024-05-15 02:39:18.972414] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:22:31.726 A controller has encountered a failure and is being reset. 00:22:31.726 [2024-05-15 02:39:18.972473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b0b0 (9): Bad file descriptor 00:22:31.726 Controller properly reset. 00:22:31.726 Initializing NVMe Controllers 00:22:31.726 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:31.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:31.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:31.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:31.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:31.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:31.726 Initialization complete. Launching workers. 
00:22:31.726 Starting thread on core 1 00:22:31.726 Starting thread on core 2 00:22:31.726 Starting thread on core 3 00:22:31.726 Starting thread on core 0 00:22:31.726 02:39:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:22:31.726 00:22:31.726 real 0m11.642s 00:22:31.726 user 0m19.478s 00:22:31.726 sys 0m5.532s 00:22:31.726 02:39:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:31.726 02:39:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.726 ************************************ 00:22:31.726 END TEST nvmf_target_disconnect_tc2 00:22:31.726 ************************************ 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.985 rmmod nvme_tcp 00:22:31.985 rmmod nvme_fabrics 00:22:31.985 rmmod nvme_keyring 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2404546 ']' 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2404546 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2404546 ']' 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 2404546 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2404546 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2404546' 00:22:31.985 killing process with pid 2404546 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 2404546 00:22:31.985 [2024-05-15 02:39:19.241116] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:22:31.985 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 2404546 00:22:32.242 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.242 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.242 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.242 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.242 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.242 02:39:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.242 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.242 02:39:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.776 02:39:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.776 00:22:34.776 real 0m17.002s 00:22:34.776 user 0m46.483s 00:22:34.776 sys 0m7.916s 00:22:34.776 02:39:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:34.776 02:39:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:34.776 ************************************ 00:22:34.776 END TEST nvmf_target_disconnect 00:22:34.776 ************************************ 00:22:34.776 02:39:21 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:22:34.776 02:39:21 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.776 02:39:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.776 02:39:21 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:22:34.776 00:22:34.776 real 16m51.028s 00:22:34.776 user 39m15.845s 00:22:34.776 sys 4m52.666s 00:22:34.776 02:39:21 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:34.776 02:39:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.776 ************************************ 00:22:34.776 END TEST nvmf_tcp 00:22:34.776 ************************************ 00:22:34.776 02:39:21 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:22:34.776 02:39:21 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:34.776 02:39:21 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:34.776 02:39:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:34.776 02:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:34.776 ************************************ 00:22:34.776 START TEST spdkcli_nvmf_tcp 00:22:34.776 ************************************ 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:34.776 * Looking for test storage... 
00:22:34.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2405751 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2405751 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 2405751 ']' 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.776 02:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.776 [2024-05-15 02:39:21.792017] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:22:34.776 [2024-05-15 02:39:21.792116] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405751 ] 00:22:34.776 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.776 [2024-05-15 02:39:21.860635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:34.776 [2024-05-15 02:39:21.973954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.776 [2024-05-15 02:39:21.973977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.714 02:39:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:35.714 02:39:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:22:35.714 02:39:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:35.714 02:39:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.714 02:39:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:35.715 02:39:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:35.715 02:39:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:22:35.715 02:39:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:35.715 02:39:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:35.715 02:39:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:35.715 02:39:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:35.715 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:35.715 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:35.715 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:35.715 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:35.715 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:35.715 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:35.715 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:35.715 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:35.715 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:35.715 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:35.715 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:35.715 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:35.715 ' 00:22:38.248 [2024-05-15 02:39:25.333248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.184 [2024-05-15 02:39:26.569062] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:39.184 [2024-05-15 02:39:26.569669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:41.723 [2024-05-15 02:39:28.856839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:22:43.684 [2024-05-15 02:39:30.827060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:45.057 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:45.057 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:45.057 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:45.057 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:45.057 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:45.057 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:45.057 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:45.057 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:45.057 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:45.057 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:45.057 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:45.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:45.057 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:45.057 02:39:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:45.057 02:39:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.057 02:39:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.057 02:39:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:45.057 02:39:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:45.057 02:39:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.057 02:39:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:22:45.057 02:39:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:22:45.625 02:39:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:45.625 02:39:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:45.625 02:39:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:45.625 02:39:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.625 02:39:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.625 02:39:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:45.625 02:39:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:45.625 02:39:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.626 02:39:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:45.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:45.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:45.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:45.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:45.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:45.626 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:45.626 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:45.626 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:45.626 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:45.626 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:45.626 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:45.626 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:45.626 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:45.626 ' 00:22:50.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:50.902 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:50.902 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:50.902 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:50.902 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:50.902 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:50.902 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:50.902 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:50.902 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:50.902 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:50.902 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:50.902 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:50.902 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:50.902 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2405751 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2405751 ']' 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2405751 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2405751 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2405751' 00:22:50.902 killing process with pid 2405751 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 2405751 00:22:50.902 [2024-05-15 02:39:38.300157] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:50.902 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 2405751 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2405751 ']' 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2405751 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2405751 ']' 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2405751 00:22:51.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2405751) - No such process 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 2405751 is not found' 00:22:51.471 Process with pid 2405751 is not found 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:51.471 00:22:51.471 real 0m16.902s 00:22:51.471 user 0m35.843s 00:22:51.471 sys 0m0.898s 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:51.471 02:39:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:22:51.471 ************************************ 00:22:51.471 END TEST spdkcli_nvmf_tcp 00:22:51.471 ************************************ 00:22:51.471 02:39:38 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:51.471 02:39:38 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:51.471 02:39:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:51.471 02:39:38 -- common/autotest_common.sh@10 -- # set +x 00:22:51.471 ************************************ 00:22:51.471 START TEST nvmf_identify_passthru 00:22:51.471 ************************************ 00:22:51.471 02:39:38 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:51.471 * Looking for test storage... 00:22:51.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.471 02:39:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.471 02:39:38 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.471 02:39:38 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.471 02:39:38 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.471 02:39:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.471 02:39:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.471 02:39:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.471 02:39:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:51.471 02:39:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.471 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.471 02:39:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.471 02:39:38 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.471 02:39:38 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.471 02:39:38 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.471 02:39:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.471 02:39:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.471 02:39:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.471 02:39:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:51.472 02:39:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.472 02:39:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:51.472 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:51.472 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.472 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:51.472 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:51.472 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:51.472 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.472 02:39:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:51.472 02:39:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.472 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:51.472 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:51.472 02:39:38 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.472 02:39:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.004 02:39:41 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:54.004 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:54.004 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:54.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:54.004 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
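The trace above is gather_supported_nvmf_pci_devs mapping the test NIC: both ports of the Intel E810 (device ID 0x159b) are found at 0000:0a:00.0 and 0000:0a:00.1, bound to the ice driver, with kernel net devices cvl_0_0 and cvl_0_1. Roughly the same discovery can be done by hand; the lspci-based loop below is only a sketch and is not what the script runs (it walks its own pci_bus_cache):

# list Intel E810 ports (vendor 0x8086, device 0x159b) and their kernel net devices
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
done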
00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:54.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:22:54.004 00:22:54.004 --- 10.0.0.2 ping statistics --- 00:22:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.004 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:22:54.004 00:22:54.004 --- 10.0.0.1 ping statistics --- 00:22:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.004 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.004 02:39:41 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.004 02:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:54.004 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:54.004 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:54.004 02:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:54.004 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:22:54.004 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:22:54.004 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:22:54.005 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:22:54.005 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:22:54.005 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:22:54.005 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:54.005 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:54.005 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:22:54.005 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:22:54.005 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:22:54.005 02:39:41 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:22:54.005 02:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:22:54.005 02:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:22:54.005 02:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:22:54.005 02:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:54.005 02:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:54.005 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.198 
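nvmf_tcp_init, traced above, turns the two E810 ports into a self-contained target/initiator pair on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in iptables, and both directions are verified with ping before any NVMe/TCP traffic flows. Collected into one standalone sketch, using the interface names and addresses from this run:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace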
02:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:22:58.198 02:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:22:58.198 02:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:58.198 02:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:58.198 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.387 02:39:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:23:02.387 02:39:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:02.387 02:39:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:02.387 02:39:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2410797 00:23:02.387 02:39:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:02.387 02:39:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.387 02:39:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2410797 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 2410797 ']' 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:02.387 02:39:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:02.387 [2024-05-15 02:39:49.792593] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:23:02.387 [2024-05-15 02:39:49.792671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.646 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.646 [2024-05-15 02:39:49.867371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.646 [2024-05-15 02:39:49.973547] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.647 [2024-05-15 02:39:49.973598] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
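Above, the nvmf target is launched inside the target namespace with --wait-for-rpc, so nothing initializes until RPCs arrive, and the harness then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified equivalent from a local SPDK checkout (the polling loop below is only a stand-in for the real waitforlisten helper):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# poll the RPC socket until the target is ready to accept configuration
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done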
00:23:02.647 [2024-05-15 02:39:49.973626] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.647 [2024-05-15 02:39:49.973637] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.647 [2024-05-15 02:39:49.973646] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.647 [2024-05-15 02:39:49.973729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.647 [2024-05-15 02:39:49.973794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.647 [2024-05-15 02:39:49.973861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.647 [2024-05-15 02:39:49.973863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:23:03.586 02:39:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.586 INFO: Log level set to 20 00:23:03.586 INFO: Requests: 00:23:03.586 { 00:23:03.586 "jsonrpc": "2.0", 00:23:03.586 "method": "nvmf_set_config", 00:23:03.586 "id": 1, 00:23:03.586 "params": { 00:23:03.586 "admin_cmd_passthru": { 00:23:03.586 "identify_ctrlr": true 00:23:03.586 } 00:23:03.586 } 00:23:03.586 } 00:23:03.586 00:23:03.586 INFO: response: 00:23:03.586 { 00:23:03.586 "jsonrpc": "2.0", 00:23:03.586 "id": 1, 00:23:03.586 "result": true 00:23:03.586 } 00:23:03.586 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.586 02:39:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.586 INFO: Setting log level to 20 00:23:03.586 INFO: Setting log level to 20 00:23:03.586 INFO: Log level set to 20 00:23:03.586 INFO: Log level set to 20 00:23:03.586 INFO: Requests: 00:23:03.586 { 00:23:03.586 "jsonrpc": "2.0", 00:23:03.586 "method": "framework_start_init", 00:23:03.586 "id": 1 00:23:03.586 } 00:23:03.586 00:23:03.586 INFO: Requests: 00:23:03.586 { 00:23:03.586 "jsonrpc": "2.0", 00:23:03.586 "method": "framework_start_init", 00:23:03.586 "id": 1 00:23:03.586 } 00:23:03.586 00:23:03.586 [2024-05-15 02:39:50.868349] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:23:03.586 INFO: response: 00:23:03.586 { 00:23:03.586 "jsonrpc": "2.0", 00:23:03.586 "id": 1, 00:23:03.586 "result": true 00:23:03.586 } 00:23:03.586 00:23:03.586 INFO: response: 00:23:03.586 { 00:23:03.586 "jsonrpc": "2.0", 00:23:03.586 "id": 1, 00:23:03.586 "result": true 00:23:03.586 } 00:23:03.586 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.586 02:39:50 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.586 02:39:50 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:23:03.586 INFO: Setting log level to 40 00:23:03.586 INFO: Setting log level to 40 00:23:03.586 INFO: Setting log level to 40 00:23:03.586 [2024-05-15 02:39:50.878535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.586 02:39:50 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.586 02:39:50 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.586 02:39:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:06.930 Nvme0n1 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.930 02:39:53 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.930 02:39:53 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.930 02:39:53 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:06.930 [2024-05-15 02:39:53.774519] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:06.930 [2024-05-15 02:39:53.774828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.930 02:39:53 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:06.930 [ 00:23:06.930 { 00:23:06.930 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:06.930 "subtype": "Discovery", 00:23:06.930 "listen_addresses": [], 00:23:06.930 "allow_any_host": true, 00:23:06.930 "hosts": [] 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.930 "subtype": "NVMe", 00:23:06.930 "listen_addresses": [ 00:23:06.930 { 00:23:06.930 "trtype": "TCP", 
00:23:06.930 "adrfam": "IPv4", 00:23:06.930 "traddr": "10.0.0.2", 00:23:06.930 "trsvcid": "4420" 00:23:06.930 } 00:23:06.930 ], 00:23:06.930 "allow_any_host": true, 00:23:06.930 "hosts": [], 00:23:06.930 "serial_number": "SPDK00000000000001", 00:23:06.930 "model_number": "SPDK bdev Controller", 00:23:06.930 "max_namespaces": 1, 00:23:06.930 "min_cntlid": 1, 00:23:06.930 "max_cntlid": 65519, 00:23:06.930 "namespaces": [ 00:23:06.930 { 00:23:06.930 "nsid": 1, 00:23:06.930 "bdev_name": "Nvme0n1", 00:23:06.930 "name": "Nvme0n1", 00:23:06.930 "nguid": "58F10B9CE1A44DEBB7A362A59C63F884", 00:23:06.930 "uuid": "58f10b9c-e1a4-4deb-b7a3-62a59c63f884" 00:23:06.930 } 00:23:06.930 ] 00:23:06.930 } 00:23:06.930 ] 00:23:06.930 02:39:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.930 02:39:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:06.930 02:39:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:23:06.930 02:39:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:23:06.930 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.930 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:23:06.930 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:06.930 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:23:06.930 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:23:06.930 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.930 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:23:06.930 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:23:06.930 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:23:06.930 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.930 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.930 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.190 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:23:07.190 02:39:54 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.190 rmmod nvme_tcp 00:23:07.190 rmmod nvme_fabrics 00:23:07.190 rmmod 
nvme_keyring 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2410797 ']' 00:23:07.190 02:39:54 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2410797 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 2410797 ']' 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 2410797 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2410797 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2410797' 00:23:07.190 killing process with pid 2410797 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 2410797 00:23:07.190 [2024-05-15 02:39:54.433539] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:07.190 02:39:54 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 2410797 00:23:09.096 02:39:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.096 02:39:56 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.096 02:39:56 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.096 02:39:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.096 02:39:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.096 02:39:56 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.096 02:39:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:09.096 02:39:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.005 02:39:58 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.005 00:23:11.005 real 0m19.461s 00:23:11.005 user 0m30.962s 00:23:11.005 sys 0m2.694s 00:23:11.005 02:39:58 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:11.005 02:39:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:11.005 ************************************ 00:23:11.005 END TEST nvmf_identify_passthru 00:23:11.005 ************************************ 00:23:11.005 02:39:58 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:23:11.005 02:39:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:11.005 02:39:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:11.005 02:39:58 -- common/autotest_common.sh@10 -- # set +x 00:23:11.005 ************************************ 00:23:11.005 START TEST nvmf_dif 
00:23:11.005 ************************************ 00:23:11.005 02:39:58 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:23:11.005 * Looking for test storage... 00:23:11.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:11.005 02:39:58 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.005 02:39:58 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.005 02:39:58 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.005 02:39:58 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.005 02:39:58 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.005 02:39:58 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.005 02:39:58 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.005 02:39:58 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:11.005 02:39:58 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.005 02:39:58 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:11.005 02:39:58 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:11.005 02:39:58 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:11.005 02:39:58 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:11.005 02:39:58 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.005 02:39:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:11.005 02:39:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:11.005 02:39:58 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:23:11.005 02:39:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:13.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:13.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.540 02:40:00 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:13.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:13.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:13.540 02:40:00 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:13.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:23:13.541 00:23:13.541 --- 10.0.0.2 ping statistics --- 00:23:13.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.541 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:23:13.541 00:23:13.541 --- 10.0.0.1 ping statistics --- 00:23:13.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.541 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:13.541 02:40:00 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:14.474 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:14.474 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:14.474 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:14.474 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:14.474 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:14.474 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:14.474 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:14.474 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:14.474 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:14.474 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:14.474 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:14.474 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:14.474 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:14.474 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:14.474 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:14.474 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:14.474 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:14.735 02:40:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:14.735 02:40:02 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.735 02:40:02 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:14.735 02:40:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2414540 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:14.735 02:40:02 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2414540 00:23:14.735 02:40:02 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 2414540 ']' 00:23:14.735 02:40:02 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.735 02:40:02 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:14.735 02:40:02 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.735 02:40:02 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:14.735 02:40:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:14.735 [2024-05-15 02:40:02.099330] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:23:14.735 [2024-05-15 02:40:02.099433] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.735 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.994 [2024-05-15 02:40:02.175426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.994 [2024-05-15 02:40:02.286546] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.994 [2024-05-15 02:40:02.286606] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.994 [2024-05-15 02:40:02.286628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.994 [2024-05-15 02:40:02.286646] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.994 [2024-05-15 02:40:02.286660] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.994 [2024-05-15 02:40:02.286701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.994 02:40:02 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.994 02:40:02 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:23:14.994 02:40:02 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.994 02:40:02 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.994 02:40:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:15.255 02:40:02 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.255 02:40:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:15.255 02:40:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:15.255 02:40:02 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.255 02:40:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:15.255 [2024-05-15 02:40:02.436859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.255 02:40:02 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.255 02:40:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:15.255 02:40:02 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:15.255 02:40:02 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:15.255 02:40:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:15.255 ************************************ 00:23:15.255 START TEST fio_dif_1_default 00:23:15.255 ************************************ 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:15.255 bdev_null0 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- 
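The dif target setup traced above differs from the passthru test in two ways: the TCP transport is created with --dif-insert-or-strip, so the target inserts and strips protection information on the wire, and the exported namespace is a null bdev created with 16 bytes of metadata per block and DIF type 1 instead of a real drive. As bare rpc.py calls with the values from this run (the matching listener for cnode0 follows just below in the trace):

./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420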
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:15.255 [2024-05-15 02:40:02.500990] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:15.255 [2024-05-15 02:40:02.501260] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.255 { 00:23:15.255 "params": { 00:23:15.255 "name": "Nvme$subsystem", 00:23:15.255 "trtype": "$TEST_TRANSPORT", 00:23:15.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.255 "adrfam": "ipv4", 00:23:15.255 "trsvcid": "$NVMF_PORT", 00:23:15.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.255 "hdgst": ${hdgst:-false}, 00:23:15.255 "ddgst": ${ddgst:-false} 00:23:15.255 }, 00:23:15.255 "method": "bdev_nvme_attach_controller" 00:23:15.255 } 00:23:15.255 EOF 00:23:15.255 )") 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
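fio reaches the target through the SPDK bdev fio plugin rather than the kernel NVMe/TCP initiator: the plugin library is LD_PRELOADed, --ioengine=spdk_bdev selects it, and --spdk_json_conf points at a JSON config whose single entry is the bdev_nvme_attach_controller call assembled in the trace (templated just above, printed resolved just below), so the job's filename is the resulting Nvme0n1 bdev. A standalone sketch of the same invocation; the subsystems envelope and job-file layout follow the plugin's standard format and are reconstructed here rather than copied from the log:

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "adrfam": "ipv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0"
          }
        }
      ]
    }
  ]
}
EOF

cat > dif.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
spdk_json_conf=bdev.json

[filename0]
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
EOF

LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio dif.fio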
"${sanitizers[@]}" 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:15.255 "params": { 00:23:15.255 "name": "Nvme0", 00:23:15.255 "trtype": "tcp", 00:23:15.255 "traddr": "10.0.0.2", 00:23:15.255 "adrfam": "ipv4", 00:23:15.255 "trsvcid": "4420", 00:23:15.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:15.255 "hdgst": false, 00:23:15.255 "ddgst": false 00:23:15.255 }, 00:23:15.255 "method": "bdev_nvme_attach_controller" 00:23:15.255 }' 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:15.255 02:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:15.514 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:15.514 fio-3.35 00:23:15.514 Starting 1 thread 00:23:15.514 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.718 00:23:27.718 filename0: (groupid=0, jobs=1): err= 0: pid=2414734: Wed May 15 02:40:13 2024 00:23:27.718 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10021msec) 00:23:27.718 slat (nsec): min=4490, max=32428, avg=9960.35, stdev=3615.63 00:23:27.718 clat (usec): min=914, max=45256, avg=21564.11, stdev=20454.65 00:23:27.718 lat (usec): min=922, max=45277, avg=21574.07, stdev=20454.94 00:23:27.718 clat percentiles (usec): 00:23:27.718 | 1.00th=[ 963], 5.00th=[ 971], 10.00th=[ 988], 20.00th=[ 996], 00:23:27.718 | 30.00th=[ 1020], 40.00th=[ 1074], 50.00th=[41681], 60.00th=[41681], 00:23:27.718 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:23:27.718 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[45351], 99.95th=[45351], 00:23:27.718 | 99.99th=[45351] 00:23:27.718 bw ( KiB/s): min= 672, max= 768, per=99.89%, avg=740.80, stdev=33.28, samples=20 00:23:27.718 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:23:27.718 lat (usec) : 1000=20.80% 00:23:27.718 lat (msec) : 2=28.99%, 50=50.22% 00:23:27.718 cpu : usr=89.51%, sys=10.22%, ctx=15, majf=0, minf=234 00:23:27.718 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:27.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:27.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:27.718 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:27.718 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:27.718 00:23:27.718 Run status group 0 (all jobs): 00:23:27.718 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10021-10021msec 00:23:27.718 02:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:27.718 02:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:27.718 02:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 00:23:27.719 real 0m11.056s 00:23:27.719 user 0m10.208s 00:23:27.719 sys 0m1.288s 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 ************************************ 00:23:27.719 END TEST fio_dif_1_default 00:23:27.719 ************************************ 00:23:27.719 02:40:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:27.719 02:40:13 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:27.719 02:40:13 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 ************************************ 00:23:27.719 START TEST fio_dif_1_multi_subsystems 00:23:27.719 ************************************ 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:27.719 02:40:13 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 bdev_null0 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 [2024-05-15 02:40:13.615806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 bdev_null1 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:27.719 { 00:23:27.719 "params": { 00:23:27.719 "name": "Nvme$subsystem", 00:23:27.719 "trtype": "$TEST_TRANSPORT", 00:23:27.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.719 "adrfam": "ipv4", 00:23:27.719 "trsvcid": "$NVMF_PORT", 00:23:27.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.719 "hdgst": ${hdgst:-false}, 00:23:27.719 "ddgst": ${ddgst:-false} 00:23:27.719 }, 00:23:27.719 "method": "bdev_nvme_attach_controller" 00:23:27.719 } 00:23:27.719 EOF 00:23:27.719 )") 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:27.719 02:40:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:27.719 { 00:23:27.719 "params": { 00:23:27.719 "name": "Nvme$subsystem", 00:23:27.719 "trtype": "$TEST_TRANSPORT", 00:23:27.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.719 "adrfam": "ipv4", 00:23:27.719 "trsvcid": "$NVMF_PORT", 00:23:27.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.719 "hdgst": ${hdgst:-false}, 00:23:27.719 "ddgst": ${ddgst:-false} 00:23:27.719 }, 00:23:27.719 "method": "bdev_nvme_attach_controller" 00:23:27.719 } 00:23:27.719 EOF 00:23:27.719 )") 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
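For readability: the two per-controller fragments generated above assemble into a single JSON config document for the fio spdk_bdev engine. The sketch below shows that assembled document; the outer "subsystems"/"bdev"/"config" wrapper is an assumption based on SPDK's generic JSON-config layout (it is not echoed verbatim in this trace), while the controller names, addresses and NQNs are the ones this run uses, as the printf output just below confirms.

# Sketch only: a standalone equivalent of the config the harness feeds to fio on /dev/fd/62.
cat > /tmp/nvmf_dif.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# The trace passes equivalent content via process substitution instead of a file, roughly:
#   LD_PRELOAD=<spdk>/build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvmf_dif.json <job file>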
00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:27.719 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:27.719 "params": { 00:23:27.719 "name": "Nvme0", 00:23:27.719 "trtype": "tcp", 00:23:27.719 "traddr": "10.0.0.2", 00:23:27.719 "adrfam": "ipv4", 00:23:27.719 "trsvcid": "4420", 00:23:27.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:27.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:27.719 "hdgst": false, 00:23:27.719 "ddgst": false 00:23:27.719 }, 00:23:27.719 "method": "bdev_nvme_attach_controller" 00:23:27.719 },{ 00:23:27.720 "params": { 00:23:27.720 "name": "Nvme1", 00:23:27.720 "trtype": "tcp", 00:23:27.720 "traddr": "10.0.0.2", 00:23:27.720 "adrfam": "ipv4", 00:23:27.720 "trsvcid": "4420", 00:23:27.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.720 "hdgst": false, 00:23:27.720 "ddgst": false 00:23:27.720 }, 00:23:27.720 "method": "bdev_nvme_attach_controller" 00:23:27.720 }' 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:27.720 02:40:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:27.720 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:27.720 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:27.720 fio-3.35 00:23:27.720 Starting 2 threads 00:23:27.720 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.710 00:23:37.710 filename0: (groupid=0, jobs=1): err= 0: pid=2416103: Wed May 15 02:40:24 2024 00:23:37.710 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10018msec) 00:23:37.710 slat (nsec): min=7270, max=67847, avg=9458.26, stdev=3940.99 00:23:37.710 clat (usec): min=938, max=42282, avg=21559.69, stdev=20430.74 00:23:37.710 lat (usec): min=946, max=42317, avg=21569.15, stdev=20430.52 00:23:37.710 clat percentiles (usec): 00:23:37.710 | 1.00th=[ 979], 5.00th=[ 996], 10.00th=[ 1012], 20.00th=[ 1029], 00:23:37.710 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[41157], 60.00th=[41681], 00:23:37.710 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:23:37.710 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:37.710 | 99.99th=[42206] 
00:23:37.710 bw ( KiB/s): min= 672, max= 768, per=49.94%, avg=740.80, stdev=34.86, samples=20 00:23:37.710 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:23:37.710 lat (usec) : 1000=5.39% 00:23:37.710 lat (msec) : 2=44.40%, 50=50.22% 00:23:37.710 cpu : usr=94.53%, sys=5.17%, ctx=13, majf=0, minf=178 00:23:37.710 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.710 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.710 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:37.710 filename1: (groupid=0, jobs=1): err= 0: pid=2416104: Wed May 15 02:40:24 2024 00:23:37.710 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10020msec) 00:23:37.710 slat (nsec): min=7217, max=30779, avg=9398.94, stdev=3435.20 00:23:37.710 clat (usec): min=950, max=43330, avg=21564.32, stdev=20387.74 00:23:37.710 lat (usec): min=958, max=43358, avg=21573.72, stdev=20387.35 00:23:37.710 clat percentiles (usec): 00:23:37.710 | 1.00th=[ 979], 5.00th=[ 1004], 10.00th=[ 1020], 20.00th=[ 1045], 00:23:37.710 | 30.00th=[ 1106], 40.00th=[ 1172], 50.00th=[41157], 60.00th=[41681], 00:23:37.710 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:23:37.710 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:23:37.710 | 99.99th=[43254] 00:23:37.710 bw ( KiB/s): min= 672, max= 768, per=49.94%, avg=740.80, stdev=34.86, samples=20 00:23:37.711 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:23:37.711 lat (usec) : 1000=3.72% 00:23:37.711 lat (msec) : 2=46.07%, 50=50.22% 00:23:37.711 cpu : usr=94.66%, sys=5.04%, ctx=12, majf=0, minf=134 00:23:37.711 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.711 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.711 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:37.711 00:23:37.711 Run status group 0 (all jobs): 00:23:37.711 READ: bw=1482KiB/s (1517kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=14.5MiB (15.2MB), run=10018-10020msec 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.711 00:23:37.711 real 0m11.267s 00:23:37.711 user 0m20.218s 00:23:37.711 sys 0m1.322s 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 ************************************ 00:23:37.711 END TEST fio_dif_1_multi_subsystems 00:23:37.711 ************************************ 00:23:37.711 02:40:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:37.711 02:40:24 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:37.711 02:40:24 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 ************************************ 00:23:37.711 START TEST fio_dif_rand_params 00:23:37.711 ************************************ 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:37.711 
02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 bdev_null0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:37.711 [2024-05-15 02:40:24.930476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.711 { 00:23:37.711 "params": { 00:23:37.711 "name": "Nvme$subsystem", 00:23:37.711 "trtype": "$TEST_TRANSPORT", 00:23:37.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.711 "adrfam": "ipv4", 00:23:37.711 "trsvcid": "$NVMF_PORT", 00:23:37.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.711 "hdgst": ${hdgst:-false}, 00:23:37.711 "ddgst": ${ddgst:-false} 00:23:37.711 }, 00:23:37.711 "method": "bdev_nvme_attach_controller" 00:23:37.711 } 00:23:37.711 EOF 00:23:37.711 )") 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
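Stripped of the autotest wrappers, the target-side setup traced for this NULL_DIF=3 case reduces to the RPC sequence below. rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py, so calling rpc.py directly against a running nvmf_tgt is an assumption about how one would reproduce it by hand; the arguments themselves are copied from the trace.

# Transport is created once for the whole nvmf_dif suite, with DIF insert/strip enabled.
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# Null backing bdev: 64 MB, 512-byte blocks, 16 bytes of metadata, protection information type 3.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Export it through an NVMe-oF subsystem listening on the test address.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420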
00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:37.711 "params": { 00:23:37.711 "name": "Nvme0", 00:23:37.711 "trtype": "tcp", 00:23:37.711 "traddr": "10.0.0.2", 00:23:37.711 "adrfam": "ipv4", 00:23:37.711 "trsvcid": "4420", 00:23:37.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:37.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:37.711 "hdgst": false, 00:23:37.711 "ddgst": false 00:23:37.711 }, 00:23:37.711 "method": "bdev_nvme_attach_controller" 00:23:37.711 }' 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:37.711 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:37.712 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:37.712 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:37.712 02:40:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:37.970 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:37.970 ... 
00:23:37.970 fio-3.35 00:23:37.970 Starting 3 threads 00:23:37.970 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.528 00:23:44.528 filename0: (groupid=0, jobs=1): err= 0: pid=2417501: Wed May 15 02:40:30 2024 00:23:44.528 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(127MiB/5019msec) 00:23:44.528 slat (nsec): min=4672, max=39547, avg=13737.50, stdev=4449.20 00:23:44.528 clat (usec): min=6193, max=92712, avg=14856.98, stdev=13781.00 00:23:44.528 lat (usec): min=6205, max=92725, avg=14870.71, stdev=13781.07 00:23:44.528 clat percentiles (usec): 00:23:44.528 | 1.00th=[ 6390], 5.00th=[ 6980], 10.00th=[ 7570], 20.00th=[ 8717], 00:23:44.528 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[11076], 00:23:44.528 | 70.00th=[11994], 80.00th=[13173], 90.00th=[49546], 95.00th=[51643], 00:23:44.528 | 99.00th=[54789], 99.50th=[56886], 99.90th=[91751], 99.95th=[92799], 00:23:44.528 | 99.99th=[92799] 00:23:44.528 bw ( KiB/s): min=17152, max=33024, per=35.00%, avg=25830.40, stdev=5635.17, samples=10 00:23:44.528 iops : min= 134, max= 258, avg=201.80, stdev=44.02, samples=10 00:23:44.528 lat (msec) : 10=43.77%, 20=45.65%, 50=0.99%, 100=9.58% 00:23:44.528 cpu : usr=92.65%, sys=6.88%, ctx=6, majf=0, minf=57 00:23:44.528 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.528 issued rwts: total=1012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:44.528 filename0: (groupid=0, jobs=1): err= 0: pid=2417502: Wed May 15 02:40:30 2024 00:23:44.528 read: IOPS=187, BW=23.4MiB/s (24.5MB/s)(117MiB/5006msec) 00:23:44.528 slat (nsec): min=4480, max=42760, avg=13138.90, stdev=4354.19 00:23:44.528 clat (usec): min=5928, max=95954, avg=16008.12, stdev=15162.28 00:23:44.528 lat (usec): min=5941, max=95981, avg=16021.26, stdev=15162.41 00:23:44.528 clat percentiles (usec): 00:23:44.528 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 8291], 00:23:44.528 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11338], 00:23:44.528 | 70.00th=[12649], 80.00th=[13960], 90.00th=[51119], 95.00th=[52691], 00:23:44.528 | 99.00th=[55313], 99.50th=[91751], 99.90th=[95945], 99.95th=[95945], 00:23:44.528 | 99.99th=[95945] 00:23:44.528 bw ( KiB/s): min=14336, max=35584, per=32.40%, avg=23910.40, stdev=7295.08, samples=10 00:23:44.528 iops : min= 112, max= 278, avg=186.80, stdev=56.99, samples=10 00:23:44.528 lat (msec) : 10=41.62%, 20=45.46%, 50=0.85%, 100=12.06% 00:23:44.528 cpu : usr=91.93%, sys=7.53%, ctx=10, majf=0, minf=102 00:23:44.528 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.528 issued rwts: total=937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:44.528 filename0: (groupid=0, jobs=1): err= 0: pid=2417503: Wed May 15 02:40:30 2024 00:23:44.528 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(120MiB/5044msec) 00:23:44.528 slat (nsec): min=4222, max=61109, avg=15996.56, stdev=5236.22 00:23:44.528 clat (usec): min=5712, max=95862, avg=15668.23, stdev=14776.16 00:23:44.528 lat (usec): min=5725, max=95875, avg=15684.23, stdev=14776.00 00:23:44.528 clat percentiles (usec): 
00:23:44.528 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 8848], 00:23:44.528 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11207], 00:23:44.528 | 70.00th=[12518], 80.00th=[13829], 90.00th=[51119], 95.00th=[53216], 00:23:44.528 | 99.00th=[55313], 99.50th=[90702], 99.90th=[95945], 99.95th=[95945], 00:23:44.528 | 99.99th=[95945] 00:23:44.528 bw ( KiB/s): min=16896, max=35328, per=33.24%, avg=24529.80, stdev=6250.04, samples=10 00:23:44.528 iops : min= 132, max= 276, avg=191.60, stdev=48.82, samples=10 00:23:44.528 lat (msec) : 10=42.02%, 20=45.57%, 50=1.25%, 100=11.16% 00:23:44.528 cpu : usr=92.29%, sys=7.22%, ctx=9, majf=0, minf=163 00:23:44.528 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.528 issued rwts: total=959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:44.528 00:23:44.528 Run status group 0 (all jobs): 00:23:44.528 READ: bw=72.1MiB/s (75.6MB/s), 23.4MiB/s-25.2MiB/s (24.5MB/s-26.4MB/s), io=364MiB (381MB), run=5006-5044msec 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.528 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
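The NULL_DIF=2 parameters just set above (bs=4k, numjobs=8, iodepth=16, files=2, i.e. jobs filename0 through filename2) drive the 24-thread run further below against three DIF type-2 subsystems. The fio job file itself is built on the fly by gen_fio_conf and is never echoed in this trace, so the following is only a sketch of its likely shape: the bdev names Nvme0n1..Nvme2n1 assume each attached controller NvmeX exposes a single namespace, and thread=1 reflects the SPDK fio bdev plugin's threading requirement; rw, bs and iodepth are taken from the fio banner and results below.

# Hypothetical job file matching the traced parameters (sketch, not the generated file).
cat > /tmp/dif_rand_params.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
FIO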
00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 bdev_null0 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 [2024-05-15 02:40:31.132407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 bdev_null1 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 bdev_null2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:44.529 02:40:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.529 { 00:23:44.529 "params": { 00:23:44.529 "name": "Nvme$subsystem", 00:23:44.529 "trtype": "$TEST_TRANSPORT", 00:23:44.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.529 "adrfam": "ipv4", 00:23:44.529 "trsvcid": "$NVMF_PORT", 00:23:44.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.529 "hdgst": ${hdgst:-false}, 00:23:44.529 "ddgst": ${ddgst:-false} 00:23:44.529 }, 00:23:44.529 "method": "bdev_nvme_attach_controller" 00:23:44.529 } 00:23:44.529 EOF 00:23:44.529 )") 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.529 { 00:23:44.529 "params": { 00:23:44.529 "name": "Nvme$subsystem", 00:23:44.529 "trtype": "$TEST_TRANSPORT", 00:23:44.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.529 "adrfam": "ipv4", 00:23:44.529 "trsvcid": "$NVMF_PORT", 00:23:44.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.529 "hdgst": ${hdgst:-false}, 00:23:44.529 "ddgst": ${ddgst:-false} 00:23:44.529 }, 00:23:44.529 "method": "bdev_nvme_attach_controller" 00:23:44.529 } 00:23:44.529 EOF 00:23:44.529 )") 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.529 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.529 { 00:23:44.529 "params": { 00:23:44.529 "name": "Nvme$subsystem", 00:23:44.529 "trtype": "$TEST_TRANSPORT", 00:23:44.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.529 "adrfam": "ipv4", 00:23:44.529 "trsvcid": "$NVMF_PORT", 00:23:44.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.530 "hdgst": ${hdgst:-false}, 00:23:44.530 "ddgst": ${ddgst:-false} 00:23:44.530 }, 00:23:44.530 "method": "bdev_nvme_attach_controller" 00:23:44.530 } 00:23:44.530 EOF 00:23:44.530 )") 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:44.530 "params": { 00:23:44.530 "name": "Nvme0", 00:23:44.530 "trtype": "tcp", 00:23:44.530 "traddr": "10.0.0.2", 00:23:44.530 "adrfam": "ipv4", 00:23:44.530 "trsvcid": "4420", 00:23:44.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:44.530 "hdgst": false, 00:23:44.530 "ddgst": false 00:23:44.530 }, 00:23:44.530 "method": "bdev_nvme_attach_controller" 00:23:44.530 },{ 00:23:44.530 "params": { 00:23:44.530 "name": "Nvme1", 00:23:44.530 "trtype": "tcp", 00:23:44.530 "traddr": "10.0.0.2", 00:23:44.530 "adrfam": "ipv4", 00:23:44.530 "trsvcid": "4420", 00:23:44.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.530 "hdgst": false, 00:23:44.530 "ddgst": false 00:23:44.530 }, 00:23:44.530 "method": "bdev_nvme_attach_controller" 00:23:44.530 },{ 00:23:44.530 "params": { 00:23:44.530 "name": "Nvme2", 00:23:44.530 "trtype": "tcp", 00:23:44.530 "traddr": "10.0.0.2", 00:23:44.530 "adrfam": "ipv4", 00:23:44.530 "trsvcid": "4420", 00:23:44.530 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.530 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:44.530 "hdgst": false, 00:23:44.530 "ddgst": false 00:23:44.530 }, 00:23:44.530 "method": "bdev_nvme_attach_controller" 00:23:44.530 }' 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1341 -- # asan_lib= 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:44.530 02:40:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.530 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:44.530 ... 00:23:44.530 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:44.530 ... 00:23:44.530 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:44.530 ... 00:23:44.530 fio-3.35 00:23:44.530 Starting 24 threads 00:23:44.530 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.732 00:23:56.732 filename0: (groupid=0, jobs=1): err= 0: pid=2418361: Wed May 15 02:40:42 2024 00:23:56.732 read: IOPS=354, BW=1416KiB/s (1450kB/s)(14.1MiB/10166msec) 00:23:56.732 slat (usec): min=8, max=127, avg=33.94, stdev=12.01 00:23:56.732 clat (msec): min=31, max=580, avg=44.86, stdev=68.92 00:23:56.732 lat (msec): min=31, max=580, avg=44.89, stdev=68.92 00:23:56.732 clat percentiles (msec): 00:23:56.732 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.732 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.732 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 43], 00:23:56.732 | 99.00th=[ 502], 99.50th=[ 527], 99.90th=[ 584], 99.95th=[ 584], 00:23:56.732 | 99.99th=[ 584] 00:23:56.732 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=1433.75, stdev=777.64, samples=20 00:23:56.732 iops : min= 32, max= 480, avg=358.40, stdev=194.40, samples=20 00:23:56.732 lat (msec) : 50=96.89%, 100=0.44%, 500=1.33%, 750=1.33% 00:23:56.732 cpu : usr=91.37%, sys=4.12%, ctx=155, majf=0, minf=32 00:23:56.732 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:56.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.732 filename0: (groupid=0, jobs=1): err= 0: pid=2418362: Wed May 15 02:40:42 2024 00:23:56.732 read: IOPS=356, BW=1425KiB/s (1459kB/s)(14.2MiB/10188msec) 00:23:56.732 slat (usec): min=8, max=160, avg=38.34, stdev=17.73 00:23:56.732 clat (msec): min=21, max=504, avg=44.57, stdev=57.26 00:23:56.732 lat (msec): min=21, max=504, avg=44.61, stdev=57.26 00:23:56.732 clat percentiles (msec): 00:23:56.732 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.732 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.732 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 44], 00:23:56.732 | 99.00th=[ 321], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 506], 00:23:56.732 | 99.99th=[ 506] 00:23:56.732 bw ( KiB/s): min= 128, max= 1920, per=4.19%, avg=1445.60, stdev=757.42, samples=20 00:23:56.732 iops : min= 32, max= 480, avg=361.40, stdev=189.36, samples=20 00:23:56.732 lat (msec) : 50=95.98%, 100=0.50%, 500=3.47%, 750=0.06% 00:23:56.732 cpu : usr=98.13%, sys=1.31%, ctx=29, majf=0, minf=29 00:23:56.732 IO 
depths : 1=5.7%, 2=11.8%, 4=24.6%, 8=51.0%, 16=6.8%, 32=0.0%, >=64=0.0% 00:23:56.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 issued rwts: total=3630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.732 filename0: (groupid=0, jobs=1): err= 0: pid=2418363: Wed May 15 02:40:42 2024 00:23:56.732 read: IOPS=360, BW=1444KiB/s (1479kB/s)(14.4MiB/10189msec) 00:23:56.732 slat (usec): min=8, max=127, avg=31.81, stdev=12.76 00:23:56.732 clat (msec): min=16, max=426, avg=43.99, stdev=48.26 00:23:56.732 lat (msec): min=16, max=426, avg=44.02, stdev=48.25 00:23:56.732 clat percentiles (msec): 00:23:56.732 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.732 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.732 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 47], 00:23:56.732 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 397], 99.95th=[ 426], 00:23:56.732 | 99.99th=[ 426] 00:23:56.732 bw ( KiB/s): min= 240, max= 1920, per=4.25%, avg=1464.80, stdev=723.55, samples=20 00:23:56.732 iops : min= 60, max= 480, avg=366.20, stdev=180.89, samples=20 00:23:56.732 lat (msec) : 20=0.05%, 50=95.00%, 100=0.60%, 250=1.52%, 500=2.83% 00:23:56.732 cpu : usr=90.68%, sys=4.01%, ctx=222, majf=0, minf=29 00:23:56.732 IO depths : 1=3.8%, 2=10.0%, 4=24.9%, 8=52.6%, 16=8.6%, 32=0.0%, >=64=0.0% 00:23:56.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 issued rwts: total=3678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.732 filename0: (groupid=0, jobs=1): err= 0: pid=2418364: Wed May 15 02:40:42 2024 00:23:56.732 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.4MiB/10188msec) 00:23:56.732 slat (usec): min=8, max=113, avg=33.41, stdev=22.05 00:23:56.732 clat (msec): min=25, max=326, avg=43.77, stdev=45.59 00:23:56.732 lat (msec): min=25, max=326, avg=43.81, stdev=45.59 00:23:56.732 clat percentiles (msec): 00:23:56.732 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.732 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.732 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 53], 00:23:56.732 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 317], 99.95th=[ 326], 00:23:56.732 | 99.99th=[ 326] 00:23:56.732 bw ( KiB/s): min= 240, max= 1920, per=4.27%, avg=1471.20, stdev=712.93, samples=20 00:23:56.732 iops : min= 60, max= 480, avg=367.80, stdev=178.23, samples=20 00:23:56.732 lat (msec) : 50=94.86%, 100=0.43%, 250=2.54%, 500=2.17% 00:23:56.732 cpu : usr=98.25%, sys=1.35%, ctx=16, majf=0, minf=26 00:23:56.732 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:23:56.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 issued rwts: total=3694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.732 filename0: (groupid=0, jobs=1): err= 0: pid=2418365: Wed May 15 02:40:42 2024 00:23:56.732 read: IOPS=358, BW=1434KiB/s (1469kB/s)(14.1MiB/10095msec) 00:23:56.732 slat (usec): min=8, max=122, avg=46.64, stdev=24.03 00:23:56.732 clat (msec): min=20, max=436, 
avg=44.26, stdev=56.03 00:23:56.732 lat (msec): min=20, max=436, avg=44.31, stdev=56.02 00:23:56.732 clat percentiles (msec): 00:23:56.732 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.732 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.732 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 40], 95.00th=[ 45], 00:23:56.732 | 99.00th=[ 380], 99.50th=[ 426], 99.90th=[ 435], 99.95th=[ 435], 00:23:56.732 | 99.99th=[ 435] 00:23:56.732 bw ( KiB/s): min= 128, max= 1920, per=4.18%, avg=1441.60, stdev=753.51, samples=20 00:23:56.732 iops : min= 32, max= 480, avg=360.40, stdev=188.38, samples=20 00:23:56.732 lat (msec) : 50=95.41%, 100=1.05%, 250=0.83%, 500=2.71% 00:23:56.732 cpu : usr=98.23%, sys=1.36%, ctx=12, majf=0, minf=35 00:23:56.732 IO depths : 1=1.3%, 2=7.3%, 4=24.2%, 8=56.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:23:56.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 issued rwts: total=3620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.732 filename0: (groupid=0, jobs=1): err= 0: pid=2418366: Wed May 15 02:40:42 2024 00:23:56.732 read: IOPS=355, BW=1421KiB/s (1455kB/s)(14.1MiB/10172msec) 00:23:56.732 slat (usec): min=8, max=173, avg=41.03, stdev=31.91 00:23:56.732 clat (msec): min=16, max=543, avg=44.83, stdev=58.59 00:23:56.732 lat (msec): min=16, max=543, avg=44.87, stdev=58.59 00:23:56.732 clat percentiles (msec): 00:23:56.732 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:23:56.732 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.732 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 41], 95.00th=[ 45], 00:23:56.732 | 99.00th=[ 380], 99.50th=[ 418], 99.90th=[ 542], 99.95th=[ 542], 00:23:56.732 | 99.99th=[ 542] 00:23:56.732 bw ( KiB/s): min= 128, max= 1952, per=4.17%, avg=1439.35, stdev=751.51, samples=20 00:23:56.732 iops : min= 32, max= 488, avg=359.80, stdev=187.87, samples=20 00:23:56.732 lat (msec) : 20=0.28%, 50=95.68%, 100=0.66%, 250=0.28%, 500=2.93% 00:23:56.732 lat (msec) : 750=0.17% 00:23:56.732 cpu : usr=98.18%, sys=1.27%, ctx=59, majf=0, minf=38 00:23:56.732 IO depths : 1=0.6%, 2=1.5%, 4=4.3%, 8=76.6%, 16=16.9%, 32=0.0%, >=64=0.0% 00:23:56.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 complete : 0=0.0%, 4=90.2%, 8=8.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.732 issued rwts: total=3614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.732 filename0: (groupid=0, jobs=1): err= 0: pid=2418367: Wed May 15 02:40:42 2024 00:23:56.732 read: IOPS=363, BW=1455KiB/s (1490kB/s)(14.5MiB/10200msec) 00:23:56.732 slat (nsec): min=5418, max=74232, avg=16811.75, stdev=10629.73 00:23:56.732 clat (msec): min=19, max=390, avg=43.81, stdev=45.86 00:23:56.732 lat (msec): min=19, max=390, avg=43.83, stdev=45.86 00:23:56.732 clat percentiles (msec): 00:23:56.732 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:23:56.732 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.732 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 43], 95.00th=[ 49], 00:23:56.732 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 380], 99.95th=[ 393], 00:23:56.732 | 99.99th=[ 393] 00:23:56.732 bw ( KiB/s): min= 240, max= 1920, per=4.28%, avg=1477.60, stdev=715.96, samples=20 00:23:56.733 iops : min= 60, max= 480, avg=369.40, stdev=178.99, 
samples=20 00:23:56.733 lat (msec) : 20=0.86%, 50=94.39%, 100=0.05%, 250=2.53%, 500=2.16% 00:23:56.733 cpu : usr=98.16%, sys=1.41%, ctx=36, majf=0, minf=33 00:23:56.733 IO depths : 1=1.7%, 2=7.5%, 4=24.0%, 8=56.0%, 16=10.8%, 32=0.0%, >=64=0.0% 00:23:56.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 issued rwts: total=3710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.733 filename0: (groupid=0, jobs=1): err= 0: pid=2418368: Wed May 15 02:40:42 2024 00:23:56.733 read: IOPS=365, BW=1460KiB/s (1495kB/s)(14.6MiB/10212msec) 00:23:56.733 slat (usec): min=8, max=534, avg=34.14, stdev=20.40 00:23:56.733 clat (msec): min=10, max=399, avg=43.50, stdev=48.16 00:23:56.733 lat (msec): min=10, max=399, avg=43.53, stdev=48.15 00:23:56.733 clat percentiles (msec): 00:23:56.733 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.733 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.733 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 44], 00:23:56.733 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 368], 99.95th=[ 401], 00:23:56.733 | 99.99th=[ 401] 00:23:56.733 bw ( KiB/s): min= 240, max= 2048, per=4.30%, avg=1484.80, stdev=734.05, samples=20 00:23:56.733 iops : min= 60, max= 512, avg=371.20, stdev=183.51, samples=20 00:23:56.733 lat (msec) : 20=1.02%, 50=94.47%, 100=0.21%, 250=1.29%, 500=3.00% 00:23:56.733 cpu : usr=89.69%, sys=4.76%, ctx=67, majf=0, minf=36 00:23:56.733 IO depths : 1=5.2%, 2=11.3%, 4=24.8%, 8=51.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:23:56.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 issued rwts: total=3728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.733 filename1: (groupid=0, jobs=1): err= 0: pid=2418369: Wed May 15 02:40:42 2024 00:23:56.733 read: IOPS=365, BW=1462KiB/s (1497kB/s)(14.6MiB/10204msec) 00:23:56.733 slat (usec): min=8, max=241, avg=32.01, stdev=28.15 00:23:56.733 clat (msec): min=10, max=302, avg=43.52, stdev=47.97 00:23:56.733 lat (msec): min=10, max=302, avg=43.55, stdev=47.96 00:23:56.733 clat percentiles (msec): 00:23:56.733 | 1.00th=[ 20], 5.00th=[ 25], 10.00th=[ 32], 20.00th=[ 33], 00:23:56.733 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.733 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 43], 95.00th=[ 47], 00:23:56.733 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 300], 99.95th=[ 305], 00:23:56.733 | 99.99th=[ 305] 00:23:56.733 bw ( KiB/s): min= 256, max= 2000, per=4.31%, avg=1485.60, stdev=731.08, samples=20 00:23:56.733 iops : min= 64, max= 500, avg=371.40, stdev=182.77, samples=20 00:23:56.733 lat (msec) : 20=1.53%, 50=93.75%, 100=0.43%, 250=1.29%, 500=3.00% 00:23:56.733 cpu : usr=94.85%, sys=2.63%, ctx=100, majf=0, minf=37 00:23:56.733 IO depths : 1=2.5%, 2=7.9%, 4=22.5%, 8=56.7%, 16=10.4%, 32=0.0%, >=64=0.0% 00:23:56.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 issued rwts: total=3730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.733 filename1: (groupid=0, jobs=1): err= 0: pid=2418370: Wed May 15 02:40:42 2024 
00:23:56.733 read: IOPS=356, BW=1427KiB/s (1461kB/s)(14.2MiB/10176msec) 00:23:56.733 slat (usec): min=9, max=169, avg=42.34, stdev=22.23 00:23:56.733 clat (msec): min=25, max=504, avg=44.42, stdev=57.60 00:23:56.733 lat (msec): min=25, max=504, avg=44.46, stdev=57.59 00:23:56.733 clat percentiles (msec): 00:23:56.733 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.733 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.733 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 44], 00:23:56.733 | 99.00th=[ 321], 99.50th=[ 464], 99.90th=[ 481], 99.95th=[ 506], 00:23:56.733 | 99.99th=[ 506] 00:23:56.733 bw ( KiB/s): min= 128, max= 1920, per=4.19%, avg=1445.40, stdev=756.94, samples=20 00:23:56.733 iops : min= 32, max= 480, avg=361.35, stdev=189.23, samples=20 00:23:56.733 lat (msec) : 50=96.09%, 100=0.44%, 500=3.42%, 750=0.06% 00:23:56.733 cpu : usr=97.86%, sys=1.44%, ctx=26, majf=0, minf=36 00:23:56.733 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:23:56.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 issued rwts: total=3630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.733 filename1: (groupid=0, jobs=1): err= 0: pid=2418371: Wed May 15 02:40:42 2024 00:23:56.733 read: IOPS=356, BW=1426KiB/s (1461kB/s)(14.2MiB/10179msec) 00:23:56.733 slat (usec): min=9, max=127, avg=45.50, stdev=23.20 00:23:56.733 clat (msec): min=23, max=504, avg=44.44, stdev=57.78 00:23:56.733 lat (msec): min=24, max=504, avg=44.49, stdev=57.77 00:23:56.733 clat percentiles (msec): 00:23:56.733 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.733 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.733 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 44], 00:23:56.733 | 99.00th=[ 321], 99.50th=[ 464], 99.90th=[ 481], 99.95th=[ 506], 00:23:56.733 | 99.99th=[ 506] 00:23:56.733 bw ( KiB/s): min= 128, max= 1920, per=4.19%, avg=1445.60, stdev=757.16, samples=20 00:23:56.733 iops : min= 32, max= 480, avg=361.40, stdev=189.29, samples=20 00:23:56.733 lat (msec) : 50=96.09%, 100=0.44%, 250=0.06%, 500=3.36%, 750=0.06% 00:23:56.733 cpu : usr=94.98%, sys=2.73%, ctx=111, majf=0, minf=32 00:23:56.733 IO depths : 1=4.1%, 2=9.9%, 4=23.3%, 8=53.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:23:56.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 issued rwts: total=3630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.733 filename1: (groupid=0, jobs=1): err= 0: pid=2418372: Wed May 15 02:40:42 2024 00:23:56.733 read: IOPS=361, BW=1446KiB/s (1480kB/s)(14.4MiB/10182msec) 00:23:56.733 slat (usec): min=8, max=122, avg=35.07, stdev=19.90 00:23:56.733 clat (msec): min=16, max=462, avg=44.01, stdev=48.27 00:23:56.733 lat (msec): min=16, max=462, avg=44.04, stdev=48.27 00:23:56.733 clat percentiles (msec): 00:23:56.733 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.733 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.733 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 43], 95.00th=[ 57], 00:23:56.733 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 464], 99.95th=[ 464], 00:23:56.733 | 99.99th=[ 464] 00:23:56.733 bw ( KiB/s): 
min= 176, max= 1936, per=4.25%, avg=1465.60, stdev=722.61, samples=20 00:23:56.733 iops : min= 44, max= 484, avg=366.40, stdev=180.65, samples=20 00:23:56.733 lat (msec) : 20=0.79%, 50=93.40%, 100=1.41%, 250=2.34%, 500=2.07% 00:23:56.733 cpu : usr=98.35%, sys=1.24%, ctx=15, majf=0, minf=29 00:23:56.733 IO depths : 1=0.6%, 2=6.0%, 4=22.5%, 8=58.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:23:56.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 complete : 0=0.0%, 4=93.9%, 8=0.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.733 filename1: (groupid=0, jobs=1): err= 0: pid=2418373: Wed May 15 02:40:42 2024 00:23:56.733 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.4MiB/10189msec) 00:23:56.733 slat (usec): min=8, max=117, avg=33.03, stdev=16.83 00:23:56.733 clat (msec): min=25, max=316, avg=43.76, stdev=45.52 00:23:56.733 lat (msec): min=25, max=316, avg=43.80, stdev=45.52 00:23:56.733 clat percentiles (msec): 00:23:56.733 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.733 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.733 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 53], 00:23:56.733 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 317], 99.95th=[ 317], 00:23:56.733 | 99.99th=[ 317] 00:23:56.733 bw ( KiB/s): min= 240, max= 1920, per=4.27%, avg=1471.20, stdev=712.65, samples=20 00:23:56.733 iops : min= 60, max= 480, avg=367.80, stdev=178.16, samples=20 00:23:56.733 lat (msec) : 50=94.86%, 100=0.43%, 250=2.49%, 500=2.22% 00:23:56.733 cpu : usr=98.43%, sys=1.15%, ctx=13, majf=0, minf=32 00:23:56.733 IO depths : 1=3.3%, 2=9.5%, 4=24.8%, 8=53.2%, 16=9.1%, 32=0.0%, >=64=0.0% 00:23:56.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 issued rwts: total=3694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.733 filename1: (groupid=0, jobs=1): err= 0: pid=2418374: Wed May 15 02:40:42 2024 00:23:56.733 read: IOPS=363, BW=1455KiB/s (1490kB/s)(14.5MiB/10205msec) 00:23:56.733 slat (usec): min=8, max=121, avg=33.64, stdev=23.32 00:23:56.733 clat (msec): min=10, max=363, avg=43.67, stdev=48.07 00:23:56.733 lat (msec): min=10, max=363, avg=43.70, stdev=48.07 00:23:56.733 clat percentiles (msec): 00:23:56.733 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.733 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.733 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 44], 00:23:56.733 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 317], 99.95th=[ 363], 00:23:56.733 | 99.99th=[ 363] 00:23:56.733 bw ( KiB/s): min= 240, max= 1920, per=4.29%, avg=1478.40, stdev=727.04, samples=20 00:23:56.733 iops : min= 60, max= 480, avg=369.60, stdev=181.76, samples=20 00:23:56.733 lat (msec) : 20=0.43%, 50=95.26%, 250=1.24%, 500=3.07% 00:23:56.733 cpu : usr=94.22%, sys=3.22%, ctx=158, majf=0, minf=35 00:23:56.733 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:23:56.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.733 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.733 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:23:56.733 filename1: (groupid=0, jobs=1): err= 0: pid=2418375: Wed May 15 02:40:42 2024 00:23:56.733 read: IOPS=355, BW=1423KiB/s (1457kB/s)(14.1MiB/10167msec) 00:23:56.733 slat (usec): min=8, max=103, avg=37.56, stdev=19.59 00:23:56.733 clat (msec): min=17, max=476, avg=44.70, stdev=57.40 00:23:56.733 lat (msec): min=17, max=476, avg=44.74, stdev=57.40 00:23:56.733 clat percentiles (msec): 00:23:56.733 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.733 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.733 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 40], 95.00th=[ 45], 00:23:56.733 | 99.00th=[ 401], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 477], 00:23:56.733 | 99.99th=[ 477] 00:23:56.734 bw ( KiB/s): min= 128, max= 1920, per=4.18%, avg=1440.15, stdev=751.66, samples=20 00:23:56.734 iops : min= 32, max= 480, avg=360.00, stdev=187.90, samples=20 00:23:56.734 lat (msec) : 20=0.03%, 50=95.77%, 100=0.66%, 250=0.44%, 500=3.10% 00:23:56.734 cpu : usr=98.08%, sys=1.49%, ctx=20, majf=0, minf=43 00:23:56.734 IO depths : 1=0.2%, 2=5.9%, 4=23.7%, 8=57.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:23:56.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 complete : 0=0.0%, 4=94.3%, 8=0.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.734 filename1: (groupid=0, jobs=1): err= 0: pid=2418376: Wed May 15 02:40:42 2024 00:23:56.734 read: IOPS=361, BW=1446KiB/s (1481kB/s)(14.4MiB/10188msec) 00:23:56.734 slat (usec): min=8, max=223, avg=46.49, stdev=29.08 00:23:56.734 clat (msec): min=17, max=425, avg=43.85, stdev=48.21 00:23:56.734 lat (msec): min=17, max=425, avg=43.89, stdev=48.20 00:23:56.734 clat percentiles (msec): 00:23:56.734 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.734 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.734 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 42], 95.00th=[ 56], 00:23:56.734 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 426], 00:23:56.734 | 99.99th=[ 426] 00:23:56.734 bw ( KiB/s): min= 240, max= 1968, per=4.25%, avg=1467.20, stdev=723.89, samples=20 00:23:56.734 iops : min= 60, max= 492, avg=366.80, stdev=180.97, samples=20 00:23:56.734 lat (msec) : 20=1.25%, 50=93.54%, 100=0.87%, 250=1.47%, 500=2.88% 00:23:56.734 cpu : usr=97.12%, sys=1.87%, ctx=89, majf=0, minf=26 00:23:56.734 IO depths : 1=1.2%, 2=7.0%, 4=24.4%, 8=56.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:23:56.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 issued rwts: total=3684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.734 filename2: (groupid=0, jobs=1): err= 0: pid=2418377: Wed May 15 02:40:42 2024 00:23:56.734 read: IOPS=357, BW=1431KiB/s (1466kB/s)(14.2MiB/10190msec) 00:23:56.734 slat (nsec): min=7691, max=79402, avg=29664.05, stdev=14788.46 00:23:56.734 clat (msec): min=28, max=478, avg=44.42, stdev=54.93 00:23:56.734 lat (msec): min=28, max=478, avg=44.45, stdev=54.93 00:23:56.734 clat percentiles (msec): 00:23:56.734 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.734 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.734 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 
44], 00:23:56.734 | 99.00th=[ 380], 99.50th=[ 426], 99.90th=[ 430], 99.95th=[ 481], 00:23:56.734 | 99.99th=[ 481] 00:23:56.734 bw ( KiB/s): min= 128, max= 1920, per=4.21%, avg=1452.00, stdev=748.04, samples=20 00:23:56.734 iops : min= 32, max= 480, avg=363.00, stdev=187.01, samples=20 00:23:56.734 lat (msec) : 50=95.67%, 100=0.44%, 250=1.26%, 500=2.63% 00:23:56.734 cpu : usr=98.43%, sys=1.19%, ctx=15, majf=0, minf=30 00:23:56.734 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:23:56.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 issued rwts: total=3646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.734 filename2: (groupid=0, jobs=1): err= 0: pid=2418378: Wed May 15 02:40:42 2024 00:23:56.734 read: IOPS=362, BW=1451KiB/s (1486kB/s)(14.4MiB/10198msec) 00:23:56.734 slat (nsec): min=8417, max=94977, avg=26509.26, stdev=13156.75 00:23:56.734 clat (msec): min=12, max=466, avg=43.64, stdev=49.20 00:23:56.734 lat (msec): min=12, max=466, avg=43.66, stdev=49.20 00:23:56.734 clat percentiles (msec): 00:23:56.734 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:23:56.734 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.734 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 44], 00:23:56.734 | 99.00th=[ 296], 99.50th=[ 317], 99.90th=[ 372], 99.95th=[ 468], 00:23:56.734 | 99.99th=[ 468] 00:23:56.734 bw ( KiB/s): min= 192, max= 1920, per=4.27%, avg=1473.20, stdev=733.46, samples=20 00:23:56.734 iops : min= 48, max= 480, avg=368.30, stdev=183.37, samples=20 00:23:56.734 lat (msec) : 20=0.35%, 50=95.59%, 250=0.65%, 500=3.41% 00:23:56.734 cpu : usr=97.43%, sys=1.74%, ctx=89, majf=0, minf=36 00:23:56.734 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:23:56.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 issued rwts: total=3699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.734 filename2: (groupid=0, jobs=1): err= 0: pid=2418379: Wed May 15 02:40:42 2024 00:23:56.734 read: IOPS=355, BW=1421KiB/s (1455kB/s)(14.1MiB/10173msec) 00:23:56.734 slat (nsec): min=8182, max=97415, avg=31227.74, stdev=16269.21 00:23:56.734 clat (msec): min=18, max=463, avg=44.83, stdev=57.01 00:23:56.734 lat (msec): min=18, max=463, avg=44.86, stdev=57.01 00:23:56.734 clat percentiles (msec): 00:23:56.734 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:23:56.734 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.734 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 41], 95.00th=[ 48], 00:23:56.734 | 99.00th=[ 347], 99.50th=[ 430], 99.90th=[ 464], 99.95th=[ 464], 00:23:56.734 | 99.99th=[ 464] 00:23:56.734 bw ( KiB/s): min= 152, max= 1920, per=4.17%, avg=1438.80, stdev=744.52, samples=20 00:23:56.734 iops : min= 38, max= 480, avg=359.70, stdev=186.13, samples=20 00:23:56.734 lat (msec) : 20=0.17%, 50=95.54%, 100=0.75%, 250=0.28%, 500=3.27% 00:23:56.734 cpu : usr=97.35%, sys=2.09%, ctx=22, majf=0, minf=26 00:23:56.734 IO depths : 1=1.9%, 2=4.8%, 4=13.0%, 8=66.8%, 16=13.6%, 32=0.0%, >=64=0.0% 00:23:56.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 complete : 0=0.0%, 4=91.9%, 8=5.2%, 
16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 issued rwts: total=3613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.734 filename2: (groupid=0, jobs=1): err= 0: pid=2418380: Wed May 15 02:40:42 2024 00:23:56.734 read: IOPS=355, BW=1421KiB/s (1456kB/s)(14.1MiB/10181msec) 00:23:56.734 slat (usec): min=8, max=169, avg=50.69, stdev=32.03 00:23:56.734 clat (msec): min=17, max=464, avg=44.66, stdev=56.26 00:23:56.734 lat (msec): min=17, max=464, avg=44.71, stdev=56.26 00:23:56.734 clat percentiles (msec): 00:23:56.734 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.734 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.734 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 43], 95.00th=[ 50], 00:23:56.734 | 99.00th=[ 321], 99.50th=[ 430], 99.90th=[ 464], 99.95th=[ 464], 00:23:56.734 | 99.99th=[ 464] 00:23:56.734 bw ( KiB/s): min= 128, max= 1968, per=4.18%, avg=1440.80, stdev=738.35, samples=20 00:23:56.734 iops : min= 32, max= 492, avg=360.20, stdev=184.59, samples=20 00:23:56.734 lat (msec) : 20=0.36%, 50=94.69%, 100=1.35%, 250=0.50%, 500=3.10% 00:23:56.734 cpu : usr=98.44%, sys=1.11%, ctx=19, majf=0, minf=56 00:23:56.734 IO depths : 1=0.5%, 2=5.9%, 4=22.1%, 8=59.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:23:56.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 complete : 0=0.0%, 4=93.8%, 8=1.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 issued rwts: total=3618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.734 filename2: (groupid=0, jobs=1): err= 0: pid=2418381: Wed May 15 02:40:42 2024 00:23:56.734 read: IOPS=360, BW=1443KiB/s (1478kB/s)(14.4MiB/10189msec) 00:23:56.734 slat (usec): min=8, max=114, avg=33.31, stdev=12.37 00:23:56.734 clat (msec): min=24, max=460, avg=43.95, stdev=49.36 00:23:56.734 lat (msec): min=24, max=460, avg=43.98, stdev=49.36 00:23:56.734 clat percentiles (msec): 00:23:56.734 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.734 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.734 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 38], 95.00th=[ 45], 00:23:56.734 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 460], 99.95th=[ 460], 00:23:56.734 | 99.99th=[ 460] 00:23:56.734 bw ( KiB/s): min= 176, max= 1920, per=4.24%, avg=1464.00, stdev=725.26, samples=20 00:23:56.734 iops : min= 44, max= 480, avg=366.00, stdev=181.31, samples=20 00:23:56.734 lat (msec) : 50=95.21%, 100=0.49%, 250=1.58%, 500=2.72% 00:23:56.734 cpu : usr=90.97%, sys=4.09%, ctx=73, majf=0, minf=28 00:23:56.734 IO depths : 1=3.6%, 2=9.5%, 4=24.0%, 8=54.0%, 16=8.9%, 32=0.0%, >=64=0.0% 00:23:56.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 issued rwts: total=3676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.734 filename2: (groupid=0, jobs=1): err= 0: pid=2418382: Wed May 15 02:40:42 2024 00:23:56.734 read: IOPS=365, BW=1464KiB/s (1499kB/s)(14.6MiB/10203msec) 00:23:56.734 slat (usec): min=8, max=150, avg=31.66, stdev=23.50 00:23:56.734 clat (msec): min=10, max=380, avg=43.47, stdev=45.41 00:23:56.734 lat (msec): min=10, max=380, avg=43.50, stdev=45.41 00:23:56.734 clat percentiles (msec): 00:23:56.734 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.734 
| 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.734 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 43], 95.00th=[ 58], 00:23:56.734 | 99.00th=[ 257], 99.50th=[ 279], 99.90th=[ 380], 99.95th=[ 380], 00:23:56.734 | 99.99th=[ 380] 00:23:56.734 bw ( KiB/s): min= 256, max= 2048, per=4.31%, avg=1487.20, stdev=715.00, samples=20 00:23:56.734 iops : min= 64, max= 512, avg=371.80, stdev=178.75, samples=20 00:23:56.734 lat (msec) : 20=2.68%, 50=91.54%, 100=1.07%, 250=2.84%, 500=1.87% 00:23:56.734 cpu : usr=94.78%, sys=2.82%, ctx=59, majf=0, minf=30 00:23:56.734 IO depths : 1=2.4%, 2=7.7%, 4=23.0%, 8=56.5%, 16=10.4%, 32=0.0%, >=64=0.0% 00:23:56.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.734 issued rwts: total=3734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.734 filename2: (groupid=0, jobs=1): err= 0: pid=2418383: Wed May 15 02:40:42 2024 00:23:56.734 read: IOPS=362, BW=1451KiB/s (1486kB/s)(14.4MiB/10188msec) 00:23:56.734 slat (usec): min=8, max=151, avg=31.19, stdev=28.31 00:23:56.734 clat (msec): min=23, max=315, avg=43.87, stdev=45.51 00:23:56.734 lat (msec): min=23, max=315, avg=43.90, stdev=45.50 00:23:56.734 clat percentiles (msec): 00:23:56.734 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:23:56.734 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.734 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 53], 00:23:56.735 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 317], 99.95th=[ 317], 00:23:56.735 | 99.99th=[ 317] 00:23:56.735 bw ( KiB/s): min= 256, max= 1920, per=4.27%, avg=1472.00, stdev=711.33, samples=20 00:23:56.735 iops : min= 64, max= 480, avg=368.00, stdev=177.83, samples=20 00:23:56.735 lat (msec) : 50=94.81%, 100=0.43%, 250=2.60%, 500=2.16% 00:23:56.735 cpu : usr=98.36%, sys=1.23%, ctx=17, majf=0, minf=44 00:23:56.735 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:23:56.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.735 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.735 issued rwts: total=3696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.735 filename2: (groupid=0, jobs=1): err= 0: pid=2418384: Wed May 15 02:40:42 2024 00:23:56.735 read: IOPS=362, BW=1451KiB/s (1486kB/s)(14.4MiB/10189msec) 00:23:56.735 slat (nsec): min=8430, max=90279, avg=29714.13, stdev=13368.42 00:23:56.735 clat (msec): min=28, max=316, avg=43.85, stdev=45.53 00:23:56.735 lat (msec): min=28, max=316, avg=43.88, stdev=45.53 00:23:56.735 clat percentiles (msec): 00:23:56.735 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:23:56.735 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:23:56.735 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 54], 00:23:56.735 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 317], 99.95th=[ 317], 00:23:56.735 | 99.99th=[ 317] 00:23:56.735 bw ( KiB/s): min= 240, max= 1920, per=4.27%, avg=1472.00, stdev=711.48, samples=20 00:23:56.735 iops : min= 60, max= 480, avg=368.00, stdev=177.87, samples=20 00:23:56.735 lat (msec) : 50=94.81%, 100=0.43%, 250=2.60%, 500=2.16% 00:23:56.735 cpu : usr=98.35%, sys=1.23%, ctx=18, majf=0, minf=30 00:23:56.735 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:23:56.735 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.735 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.735 issued rwts: total=3696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.735 00:23:56.735 Run status group 0 (all jobs): 00:23:56.735 READ: bw=33.7MiB/s (35.3MB/s), 1416KiB/s-1464KiB/s (1450kB/s-1499kB/s), io=344MiB (361MB), run=10095-10212msec 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 
02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 bdev_null0 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 [2024-05-15 02:40:43.094496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@30 -- # for sub in "$@" 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 bdev_null1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.735 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.735 { 00:23:56.735 "params": { 00:23:56.735 "name": "Nvme$subsystem", 00:23:56.735 "trtype": "$TEST_TRANSPORT", 00:23:56.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.735 "adrfam": "ipv4", 00:23:56.735 "trsvcid": "$NVMF_PORT", 00:23:56.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.735 "hdgst": ${hdgst:-false}, 00:23:56.735 "ddgst": ${ddgst:-false} 00:23:56.735 }, 00:23:56.735 "method": "bdev_nvme_attach_controller" 00:23:56.735 } 00:23:56.736 EOF 00:23:56.736 )") 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.736 { 00:23:56.736 "params": { 00:23:56.736 "name": "Nvme$subsystem", 00:23:56.736 "trtype": "$TEST_TRANSPORT", 00:23:56.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.736 "adrfam": "ipv4", 00:23:56.736 "trsvcid": "$NVMF_PORT", 00:23:56.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.736 "hdgst": ${hdgst:-false}, 00:23:56.736 "ddgst": ${ddgst:-false} 00:23:56.736 }, 00:23:56.736 "method": "bdev_nvme_attach_controller" 00:23:56.736 } 00:23:56.736 EOF 00:23:56.736 )") 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
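The job file consumed on /dev/fd/61 is produced by gen_fio_conf and is never echoed to this log; only the JSON bdev configuration fed to /dev/fd/62 is printed below. A rough, hand-written approximation of the whole invocation for this run, using only parameters visible in the trace above (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5) and assuming the attached controllers expose bdevs named Nvme0n1 and Nvme1n1 (the bdev names are not shown in this part of the log), would look like this sketch rather than the generated file:

    # sketch only: dif.sh builds both inputs on the fly and feeds them through /dev/fd/61 and /dev/fd/62
    cat > dif_rand.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1            # the SPDK fio plugins run in thread mode
    runtime=5
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2           # 2 jobs x 2 files = the 4 threads started below
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    EOF
    # subsys.json: the bdev_nvme_attach_controller config printed by gen_nvmf_target_json below
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf subsys.json dif_rand.fio
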
00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:56.736 "params": { 00:23:56.736 "name": "Nvme0", 00:23:56.736 "trtype": "tcp", 00:23:56.736 "traddr": "10.0.0.2", 00:23:56.736 "adrfam": "ipv4", 00:23:56.736 "trsvcid": "4420", 00:23:56.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:56.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:56.736 "hdgst": false, 00:23:56.736 "ddgst": false 00:23:56.736 }, 00:23:56.736 "method": "bdev_nvme_attach_controller" 00:23:56.736 },{ 00:23:56.736 "params": { 00:23:56.736 "name": "Nvme1", 00:23:56.736 "trtype": "tcp", 00:23:56.736 "traddr": "10.0.0.2", 00:23:56.736 "adrfam": "ipv4", 00:23:56.736 "trsvcid": "4420", 00:23:56.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.736 "hdgst": false, 00:23:56.736 "ddgst": false 00:23:56.736 }, 00:23:56.736 "method": "bdev_nvme_attach_controller" 00:23:56.736 }' 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:56.736 02:40:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.736 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:56.736 ... 00:23:56.736 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:56.736 ... 
00:23:56.736 fio-3.35 00:23:56.736 Starting 4 threads 00:23:56.736 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.995 00:24:01.995 filename0: (groupid=0, jobs=1): err= 0: pid=2419773: Wed May 15 02:40:49 2024 00:24:01.995 read: IOPS=1967, BW=15.4MiB/s (16.1MB/s)(76.9MiB/5001msec) 00:24:01.995 slat (nsec): min=7200, max=76527, avg=13679.45, stdev=6565.25 00:24:01.995 clat (usec): min=1314, max=7427, avg=4024.08, stdev=500.69 00:24:01.995 lat (usec): min=1346, max=7463, avg=4037.76, stdev=500.51 00:24:01.995 clat percentiles (usec): 00:24:01.995 | 1.00th=[ 2868], 5.00th=[ 3326], 10.00th=[ 3621], 20.00th=[ 3785], 00:24:01.995 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4047], 00:24:01.995 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4490], 95.00th=[ 4883], 00:24:01.995 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 6783], 99.95th=[ 6980], 00:24:01.995 | 99.99th=[ 7439] 00:24:01.995 bw ( KiB/s): min=15168, max=16080, per=25.47%, avg=15729.78, stdev=339.08, samples=9 00:24:01.995 iops : min= 1896, max= 2010, avg=1966.22, stdev=42.38, samples=9 00:24:01.995 lat (msec) : 2=0.13%, 4=48.39%, 10=51.48% 00:24:01.995 cpu : usr=95.58%, sys=3.92%, ctx=10, majf=0, minf=9 00:24:01.995 IO depths : 1=0.1%, 2=1.4%, 4=71.1%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.995 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.995 issued rwts: total=9841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.995 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.995 filename0: (groupid=0, jobs=1): err= 0: pid=2419774: Wed May 15 02:40:49 2024 00:24:01.995 read: IOPS=1958, BW=15.3MiB/s (16.0MB/s)(76.5MiB/5003msec) 00:24:01.995 slat (nsec): min=4034, max=48974, avg=12699.13, stdev=4569.86 00:24:01.995 clat (usec): min=2089, max=7755, avg=4046.43, stdev=527.21 00:24:01.995 lat (usec): min=2098, max=7764, avg=4059.12, stdev=527.08 00:24:01.995 clat percentiles (usec): 00:24:01.995 | 1.00th=[ 2900], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3752], 00:24:01.995 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:24:01.995 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4490], 95.00th=[ 5145], 00:24:01.995 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 7439], 99.95th=[ 7570], 00:24:01.995 | 99.99th=[ 7767] 00:24:01.995 bw ( KiB/s): min=15152, max=15968, per=25.37%, avg=15668.80, stdev=244.53, samples=10 00:24:01.995 iops : min= 1894, max= 1996, avg=1958.60, stdev=30.57, samples=10 00:24:01.995 lat (msec) : 4=46.63%, 10=53.37% 00:24:01.995 cpu : usr=92.88%, sys=5.58%, ctx=111, majf=0, minf=0 00:24:01.995 IO depths : 1=0.1%, 2=1.3%, 4=70.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.995 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.995 issued rwts: total=9798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.995 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.995 filename1: (groupid=0, jobs=1): err= 0: pid=2419775: Wed May 15 02:40:49 2024 00:24:01.995 read: IOPS=1957, BW=15.3MiB/s (16.0MB/s)(76.5MiB/5002msec) 00:24:01.995 slat (nsec): min=4107, max=51266, avg=12913.97, stdev=4861.23 00:24:01.995 clat (usec): min=2125, max=7040, avg=4046.77, stdev=560.93 00:24:01.995 lat (usec): min=2134, max=7054, avg=4059.69, stdev=561.05 00:24:01.995 clat percentiles (usec): 00:24:01.995 | 1.00th=[ 2769], 5.00th=[ 3326], 
10.00th=[ 3556], 20.00th=[ 3785], 00:24:01.995 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4047], 00:24:01.995 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4490], 95.00th=[ 5407], 00:24:01.995 | 99.00th=[ 6063], 99.50th=[ 6128], 99.90th=[ 6718], 99.95th=[ 6980], 00:24:01.995 | 99.99th=[ 7046] 00:24:01.995 bw ( KiB/s): min=15120, max=15968, per=25.31%, avg=15633.78, stdev=294.53, samples=9 00:24:01.995 iops : min= 1890, max= 1996, avg=1954.22, stdev=36.82, samples=9 00:24:01.995 lat (msec) : 4=49.80%, 10=50.20% 00:24:01.995 cpu : usr=93.50%, sys=4.58%, ctx=212, majf=0, minf=0 00:24:01.995 IO depths : 1=0.1%, 2=2.2%, 4=70.7%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.995 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.995 issued rwts: total=9790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.995 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.995 filename1: (groupid=0, jobs=1): err= 0: pid=2419776: Wed May 15 02:40:49 2024 00:24:01.995 read: IOPS=1882, BW=14.7MiB/s (15.4MB/s)(74.2MiB/5042msec) 00:24:01.995 slat (nsec): min=3961, max=61833, avg=12945.95, stdev=5300.70 00:24:01.995 clat (usec): min=2325, max=47461, avg=4187.15, stdev=1532.41 00:24:01.995 lat (usec): min=2333, max=47473, avg=4200.10, stdev=1532.18 00:24:01.995 clat percentiles (usec): 00:24:01.995 | 1.00th=[ 3097], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 3851], 00:24:01.995 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:24:01.995 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4817], 95.00th=[ 5604], 00:24:01.995 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[42206], 99.95th=[47449], 00:24:01.995 | 99.99th=[47449] 00:24:01.995 bw ( KiB/s): min=14128, max=15440, per=24.58%, avg=15182.40, stdev=384.26, samples=10 00:24:01.995 iops : min= 1766, max= 1930, avg=1897.80, stdev=48.03, samples=10 00:24:01.995 lat (msec) : 4=43.63%, 10=56.26%, 50=0.12% 00:24:01.995 cpu : usr=93.83%, sys=4.92%, ctx=122, majf=0, minf=9 00:24:01.995 IO depths : 1=0.3%, 2=2.3%, 4=70.6%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.995 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.995 issued rwts: total=9492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.995 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.995 00:24:01.995 Run status group 0 (all jobs): 00:24:01.995 READ: bw=60.3MiB/s (63.2MB/s), 14.7MiB/s-15.4MiB/s (15.4MB/s-16.1MB/s), io=304MiB (319MB), run=5001-5042msec 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.995 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.253 00:24:02.253 real 0m24.531s 00:24:02.253 user 4m32.883s 00:24:02.253 sys 0m8.227s 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:02.253 02:40:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.253 ************************************ 00:24:02.253 END TEST fio_dif_rand_params 00:24:02.253 ************************************ 00:24:02.253 02:40:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:02.253 02:40:49 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:02.253 02:40:49 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:02.253 02:40:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:02.254 ************************************ 00:24:02.254 START TEST fio_dif_digest 00:24:02.254 ************************************ 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:02.254 
02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.254 bdev_null0 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.254 [2024-05-15 02:40:49.510875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.254 { 00:24:02.254 "params": { 00:24:02.254 "name": "Nvme$subsystem", 00:24:02.254 "trtype": "$TEST_TRANSPORT", 00:24:02.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.254 "adrfam": "ipv4", 
00:24:02.254 "trsvcid": "$NVMF_PORT", 00:24:02.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.254 "hdgst": ${hdgst:-false}, 00:24:02.254 "ddgst": ${ddgst:-false} 00:24:02.254 }, 00:24:02.254 "method": "bdev_nvme_attach_controller" 00:24:02.254 } 00:24:02.254 EOF 00:24:02.254 )") 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:02.254 "params": { 00:24:02.254 "name": "Nvme0", 00:24:02.254 "trtype": "tcp", 00:24:02.254 "traddr": "10.0.0.2", 00:24:02.254 "adrfam": "ipv4", 00:24:02.254 "trsvcid": "4420", 00:24:02.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.254 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:02.254 "hdgst": true, 00:24:02.254 "ddgst": true 00:24:02.254 }, 00:24:02.254 "method": "bdev_nvme_attach_controller" 00:24:02.254 }' 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:02.254 02:40:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.512 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:02.512 ... 
00:24:02.512 fio-3.35 00:24:02.512 Starting 3 threads 00:24:02.512 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.790 00:24:14.790 filename0: (groupid=0, jobs=1): err= 0: pid=2420600: Wed May 15 02:41:00 2024 00:24:14.790 read: IOPS=224, BW=28.1MiB/s (29.5MB/s)(283MiB/10048msec) 00:24:14.790 slat (nsec): min=5358, max=79123, avg=21769.61, stdev=6159.66 00:24:14.790 clat (usec): min=6438, max=95126, avg=13291.50, stdev=6785.45 00:24:14.790 lat (usec): min=6453, max=95146, avg=13313.27, stdev=6786.06 00:24:14.790 clat percentiles (usec): 00:24:14.790 | 1.00th=[ 6980], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10290], 00:24:14.790 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12518], 60.00th=[13304], 00:24:14.790 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15270], 95.00th=[15926], 00:24:14.790 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:24:14.790 | 99.99th=[94897] 00:24:14.790 bw ( KiB/s): min=25344, max=34560, per=40.11%, avg=28902.40, stdev=2762.08, samples=20 00:24:14.790 iops : min= 198, max= 270, avg=225.80, stdev=21.58, samples=20 00:24:14.790 lat (msec) : 10=13.85%, 20=83.72%, 50=0.04%, 100=2.39% 00:24:14.790 cpu : usr=92.56%, sys=6.68%, ctx=33, majf=0, minf=249 00:24:14.790 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.790 issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.790 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.791 filename0: (groupid=0, jobs=1): err= 0: pid=2420601: Wed May 15 02:41:00 2024 00:24:14.791 read: IOPS=220, BW=27.5MiB/s (28.8MB/s)(276MiB/10047msec) 00:24:14.791 slat (nsec): min=5374, max=51392, avg=15576.24, stdev=4747.59 00:24:14.791 clat (usec): min=6291, max=95630, avg=13593.33, stdev=7123.76 00:24:14.791 lat (usec): min=6304, max=95646, avg=13608.90, stdev=7123.99 00:24:14.791 clat percentiles (usec): 00:24:14.791 | 1.00th=[ 7046], 5.00th=[ 7963], 10.00th=[ 9634], 20.00th=[10421], 00:24:14.791 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12911], 60.00th=[13960], 00:24:14.791 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16057], 95.00th=[16909], 00:24:14.791 | 99.00th=[54789], 99.50th=[56361], 99.90th=[94897], 99.95th=[95945], 00:24:14.791 | 99.99th=[95945] 00:24:14.791 bw ( KiB/s): min=19200, max=32512, per=39.22%, avg=28265.45, stdev=3480.21, samples=20 00:24:14.791 iops : min= 150, max= 254, avg=220.80, stdev=27.17, samples=20 00:24:14.791 lat (msec) : 10=13.61%, 20=84.31%, 50=0.05%, 100=2.04% 00:24:14.791 cpu : usr=92.67%, sys=6.79%, ctx=22, majf=0, minf=127 00:24:14.791 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.791 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.791 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.791 filename0: (groupid=0, jobs=1): err= 0: pid=2420602: Wed May 15 02:41:00 2024 00:24:14.791 read: IOPS=118, BW=14.8MiB/s (15.5MB/s)(148MiB/10020msec) 00:24:14.791 slat (nsec): min=7838, max=43281, avg=16407.20, stdev=4562.16 00:24:14.791 clat (msec): min=10, max=139, avg=25.32, stdev=18.98 00:24:14.791 lat (msec): min=10, max=139, avg=25.34, stdev=18.98 00:24:14.791 clat percentiles (msec): 00:24:14.791 | 
1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 15], 00:24:14.791 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 17], 00:24:14.791 | 70.00th=[ 18], 80.00th=[ 55], 90.00th=[ 58], 95.00th=[ 59], 00:24:14.791 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 140], 00:24:14.791 | 99.99th=[ 140] 00:24:14.791 bw ( KiB/s): min=12288, max=20736, per=21.01%, avg=15142.40, stdev=2448.82, samples=20 00:24:14.791 iops : min= 96, max= 162, avg=118.30, stdev=19.13, samples=20 00:24:14.791 lat (msec) : 20=77.49%, 50=0.59%, 100=21.75%, 250=0.17% 00:24:14.791 cpu : usr=94.00%, sys=5.58%, ctx=24, majf=0, minf=169 00:24:14.791 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.791 issued rwts: total=1186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.791 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.791 00:24:14.791 Run status group 0 (all jobs): 00:24:14.791 READ: bw=70.4MiB/s (73.8MB/s), 14.8MiB/s-28.1MiB/s (15.5MB/s-29.5MB/s), io=707MiB (741MB), run=10020-10048msec 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.791 00:24:14.791 real 0m11.276s 00:24:14.791 user 0m29.435s 00:24:14.791 sys 0m2.177s 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:14.791 02:41:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.791 ************************************ 00:24:14.791 END TEST fio_dif_digest 00:24:14.791 ************************************ 00:24:14.791 02:41:00 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:14.791 02:41:00 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.791 rmmod nvme_tcp 00:24:14.791 rmmod nvme_fabrics 00:24:14.791 rmmod nvme_keyring 
00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2414540 ']' 00:24:14.791 02:41:00 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2414540 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 2414540 ']' 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 2414540 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2414540 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2414540' 00:24:14.791 killing process with pid 2414540 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@965 -- # kill 2414540 00:24:14.791 [2024-05-15 02:41:00.890622] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:14.791 02:41:00 nvmf_dif -- common/autotest_common.sh@970 -- # wait 2414540 00:24:14.791 02:41:01 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:14.791 02:41:01 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:15.051 Waiting for block devices as requested 00:24:15.310 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:15.310 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:15.310 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:15.310 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:15.568 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:15.568 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:15.568 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:15.568 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:15.827 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:15.827 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:15.827 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:15.827 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:15.827 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:16.085 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:16.085 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:16.085 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:16.085 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:16.342 02:41:03 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.342 02:41:03 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.342 02:41:03 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.342 02:41:03 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.342 02:41:03 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.342 02:41:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:16.342 02:41:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.247 02:41:05 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.247 00:24:18.247 real 1m7.437s 
00:24:18.247 user 6m29.981s 00:24:18.247 sys 0m20.264s 00:24:18.247 02:41:05 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:18.247 02:41:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:18.247 ************************************ 00:24:18.247 END TEST nvmf_dif 00:24:18.247 ************************************ 00:24:18.247 02:41:05 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:18.247 02:41:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:18.247 02:41:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:18.247 02:41:05 -- common/autotest_common.sh@10 -- # set +x 00:24:18.247 ************************************ 00:24:18.247 START TEST nvmf_abort_qd_sizes 00:24:18.247 ************************************ 00:24:18.247 02:41:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:18.506 * Looking for test storage... 00:24:18.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.506 02:41:05 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.506 02:41:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.038 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:21.039 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:21.039 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:21.039 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:21.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:24:21.039 00:24:21.039 --- 10.0.0.2 ping statistics --- 00:24:21.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.039 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:24:21.039 00:24:21.039 --- 10.0.0.1 ping statistics --- 00:24:21.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.039 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:21.039 02:41:08 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:22.417 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:22.417 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:22.417 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:22.417 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:22.417 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:22.417 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:22.417 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:22.417 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:22.417 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:22.417 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:22.417 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:22.417 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:22.417 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:22.417 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:22.417 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:22.417 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:23.792 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2426053 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2426053 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 2426053 ']' 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:23.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:23.792 02:41:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:23.792 [2024-05-15 02:41:10.982813] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:24:23.793 [2024-05-15 02:41:10.982887] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.793 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.793 [2024-05-15 02:41:11.064811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.793 [2024-05-15 02:41:11.182934] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.793 [2024-05-15 02:41:11.183004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.793 [2024-05-15 02:41:11.183026] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.793 [2024-05-15 02:41:11.183037] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.793 [2024-05-15 02:41:11.183047] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.793 [2024-05-15 02:41:11.183097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.793 [2024-05-15 02:41:11.183125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.793 [2024-05-15 02:41:11.183184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.793 [2024-05-15 02:41:11.183187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.053 02:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:24.054 02:41:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:24.054 ************************************ 00:24:24.054 START TEST spdk_target_abort 00:24:24.054 ************************************ 00:24:24.054 02:41:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:24:24.054 02:41:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:24.054 02:41:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:24:24.054 02:41:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.054 02:41:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.338 spdk_targetn1 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.338 [2024-05-15 02:41:14.226968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.338 [2024-05-15 02:41:14.258964] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:27.338 [2024-05-15 02:41:14.259260] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:27.338 02:41:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:27.338 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.627 Initializing NVMe Controllers 00:24:30.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:30.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:30.627 Initialization complete. Launching workers. 00:24:30.627 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9245, failed: 0 00:24:30.627 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1208, failed to submit 8037 00:24:30.627 success 817, unsuccess 391, failed 0 00:24:30.627 02:41:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:30.627 02:41:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.627 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.211 [2024-05-15 02:41:20.612998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x829250 is same with the state(5) to be set 00:24:33.471 Initializing NVMe Controllers 00:24:33.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:33.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:33.471 Initialization complete. Launching workers. 00:24:33.471 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8844, failed: 0 00:24:33.471 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 7589 00:24:33.471 success 322, unsuccess 933, failed 0 00:24:33.471 02:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:33.471 02:41:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:33.471 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.758 Initializing NVMe Controllers 00:24:36.758 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:36.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:36.758 Initialization complete. Launching workers. 
00:24:36.758 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31054, failed: 0 00:24:36.758 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2711, failed to submit 28343 00:24:36.758 success 497, unsuccess 2214, failed 0 00:24:36.758 02:41:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:36.758 02:41:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.758 02:41:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:36.758 02:41:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.758 02:41:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:36.758 02:41:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.758 02:41:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2426053 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 2426053 ']' 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 2426053 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2426053 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2426053' 00:24:38.134 killing process with pid 2426053 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 2426053 00:24:38.134 [2024-05-15 02:41:25.291377] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:38.134 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 2426053 00:24:38.394 00:24:38.394 real 0m14.169s 00:24:38.394 user 0m53.159s 00:24:38.394 sys 0m2.862s 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:38.394 ************************************ 00:24:38.394 END TEST spdk_target_abort 00:24:38.394 ************************************ 00:24:38.394 02:41:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:38.394 02:41:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:38.394 02:41:25 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:24:38.394 02:41:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:38.394 ************************************ 00:24:38.394 START TEST kernel_target_abort 00:24:38.394 ************************************ 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:38.394 02:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:39.771 Waiting for block devices as requested 00:24:39.771 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:39.771 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:39.771 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:39.771 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:39.771 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:39.771 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:40.030 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:40.030 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:40.030 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:40.030 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:40.290 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:40.290 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:40.290 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:40.290 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:40.549 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:40.549 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:40.549 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:40.549 02:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:40.549 02:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:40.549 02:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:40.549 02:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:40.549 02:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:40.549 02:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:40.549 02:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:40.549 02:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:40.549 02:41:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:40.809 No valid GPT data, bailing 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:40.809 02:41:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:40.809 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:40.809 00:24:40.809 Discovery Log Number of Records 2, Generation counter 2 00:24:40.809 =====Discovery Log Entry 0====== 00:24:40.809 trtype: tcp 00:24:40.809 adrfam: ipv4 00:24:40.809 subtype: current discovery subsystem 00:24:40.809 treq: not specified, sq flow control disable supported 00:24:40.809 portid: 1 00:24:40.809 trsvcid: 4420 00:24:40.809 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:40.809 traddr: 10.0.0.1 00:24:40.809 eflags: none 00:24:40.809 sectype: none 00:24:40.809 =====Discovery Log Entry 1====== 00:24:40.809 trtype: tcp 00:24:40.809 adrfam: ipv4 00:24:40.810 subtype: nvme subsystem 00:24:40.810 treq: not specified, sq flow control disable supported 00:24:40.810 portid: 1 00:24:40.810 trsvcid: 4420 00:24:40.810 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:40.810 traddr: 10.0.0.1 00:24:40.810 eflags: none 00:24:40.810 sectype: none 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.810 02:41:28 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:40.810 02:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.810 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.103 Initializing NVMe Controllers 00:24:44.103 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:44.103 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:44.103 Initialization complete. Launching workers. 00:24:44.103 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27179, failed: 0 00:24:44.103 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27179, failed to submit 0 00:24:44.103 success 0, unsuccess 27179, failed 0 00:24:44.103 02:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:44.103 02:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:44.103 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.391 Initializing NVMe Controllers 00:24:47.391 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:47.391 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:47.391 Initialization complete. Launching workers. 
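[Note on the kernel-target setup above] The kernel_target_abort half of this test builds its NVMe/TCP target entirely through the in-kernel nvmet configfs interface (the mkdir/echo/ln -s sequence traced above via nvmf/common.sh configure_kernel_target), then sweeps the abort example over queue depths 4, 24 and 64 against it. The trace only shows bare echo and mkdir commands, so the configfs attribute names in the sketch below are an assumption based on the standard nvmet interface rather than a copy of common.sh; the NQN, backing device and 10.0.0.1:4420 address are taken from the run itself.

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  modprobe nvmet-tcp                              # explicit here; the kernel can also autoload it when the tcp port is enabled
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo 1            > "$subsys/attr_allow_any_host"        # assumption: the unlabeled 'echo 1' in the trace
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # block device picked by the 'for block in /sys/block/nvme*' loop
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the TCP port
  nvme discover -t tcp -a 10.0.0.1 -s 4420                 # should report the discovery subsystem plus testnqn, as in the log

Once the port is linked, the rabort() loop simply re-runs build/examples/abort with the same transport ID string and a different -q value per pass, which is why the three result blocks in this section differ only in queue depth.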
00:24:47.391 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53885, failed: 0 00:24:47.391 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13550, failed to submit 40335 00:24:47.391 success 0, unsuccess 13550, failed 0 00:24:47.391 02:41:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:47.391 02:41:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:47.391 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.926 Initializing NVMe Controllers 00:24:49.926 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:49.926 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:49.926 Initialization complete. Launching workers. 00:24:49.926 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52049, failed: 0 00:24:49.926 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12970, failed to submit 39079 00:24:49.926 success 0, unsuccess 12970, failed 0 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:49.926 02:41:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:51.368 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:51.368 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:51.368 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:51.368 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:51.368 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:51.368 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:51.368 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:51.368 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:51.368 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:51.368 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:51.368 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:51.368 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:51.368 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:51.368 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:24:51.368 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:51.368 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:52.305 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:52.305 00:24:52.305 real 0m14.083s 00:24:52.305 user 0m4.345s 00:24:52.305 sys 0m3.421s 00:24:52.305 02:41:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:52.305 02:41:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:52.305 ************************************ 00:24:52.305 END TEST kernel_target_abort 00:24:52.305 ************************************ 00:24:52.305 02:41:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:52.305 02:41:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:52.305 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.305 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:52.305 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.305 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:52.305 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.305 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.305 rmmod nvme_tcp 00:24:52.563 rmmod nvme_fabrics 00:24:52.563 rmmod nvme_keyring 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2426053 ']' 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2426053 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 2426053 ']' 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 2426053 00:24:52.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2426053) - No such process 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 2426053 is not found' 00:24:52.563 Process with pid 2426053 is not found 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:52.563 02:41:39 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:53.945 Waiting for block devices as requested 00:24:53.945 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:53.945 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:53.945 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:53.945 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:53.945 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:54.204 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:54.204 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:54.204 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:54.204 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:54.463 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:54.463 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:54.463 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:54.722 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:54.722 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:54.722 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:54.722 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:24:54.722 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:54.982 02:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:54.982 02:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:54.982 02:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.982 02:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.982 02:41:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.982 02:41:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:54.982 02:41:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.889 02:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:56.889 00:24:56.889 real 0m38.614s 00:24:56.889 user 0m59.964s 00:24:56.889 sys 0m10.207s 00:24:56.889 02:41:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:56.889 02:41:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:56.889 ************************************ 00:24:56.889 END TEST nvmf_abort_qd_sizes 00:24:56.889 ************************************ 00:24:56.889 02:41:44 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:24:56.889 02:41:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:56.889 02:41:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:56.889 02:41:44 -- common/autotest_common.sh@10 -- # set +x 00:24:57.150 ************************************ 00:24:57.150 START TEST keyring_file 00:24:57.150 ************************************ 00:24:57.150 02:41:44 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:24:57.150 * Looking for test storage... 
00:24:57.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:24:57.150 02:41:44 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:24:57.150 02:41:44 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.150 02:41:44 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.150 02:41:44 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.150 02:41:44 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.150 02:41:44 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.150 02:41:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.150 02:41:44 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.150 02:41:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:57.150 02:41:44 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.150 02:41:44 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.150 02:41:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ThMhSCQ5PG 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:57.151 02:41:44 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ThMhSCQ5PG 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ThMhSCQ5PG 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ThMhSCQ5PG 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1XSnP6uafc 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:57.151 02:41:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1XSnP6uafc 00:24:57.151 02:41:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1XSnP6uafc 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.1XSnP6uafc 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@30 -- # tgtpid=2432112 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:24:57.151 02:41:44 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2432112 00:24:57.151 02:41:44 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2432112 ']' 00:24:57.151 02:41:44 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.151 02:41:44 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:57.151 02:41:44 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.151 02:41:44 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:57.151 02:41:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:57.151 [2024-05-15 02:41:44.504475] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
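[Note on the keyring_file flow that follows] From this point the test drives two JSON-RPC sockets: spdk_tgt on the default /var/tmp/spdk.sock and bdevperf on /var/tmp/bperf.sock; the bperf_cmd, get_key and get_refcnt helpers expanded in the trace are thin wrappers around scripts/rpc.py plus jq. A condensed sketch of the sequence, reusing commands that appear verbatim below (the bperf() wrapper function and the relative scripts/rpc.py path are my shorthand, not part of the test):

  bperf() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  bperf keyring_file_add_key key0 /tmp/tmp.ThMhSCQ5PG      # register the 0600 PSK files prepared above
  bperf keyring_file_add_key key1 /tmp/tmp.1XSnP6uafc
  bperf keyring_get_keys | jq -r '.[] | select(.name == "key0") | .path'   # confirm path/refcnt via jq, as the trace does
  bperf bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

The chmod 0600 on each mktemp file above matters: the keyring module rejects key files with looser permissions, which the test exercises deliberately later in this section (chmod 0660 followed by an expected "Invalid permissions for key file" error).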
00:24:57.151 [2024-05-15 02:41:44.504563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432112 ] 00:24:57.151 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.411 [2024-05-15 02:41:44.572744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.411 [2024-05-15 02:41:44.682700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:58.346 02:41:45 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:58.346 [2024-05-15 02:41:45.456808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.346 null0 00:24:58.346 [2024-05-15 02:41:45.488821] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:58.346 [2024-05-15 02:41:45.488905] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:58.346 [2024-05-15 02:41:45.489406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:58.346 [2024-05-15 02:41:45.496866] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.346 02:41:45 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:58.346 [2024-05-15 02:41:45.508893] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:58.346 request: 00:24:58.346 { 00:24:58.346 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:58.346 "secure_channel": false, 00:24:58.346 "listen_address": { 00:24:58.346 "trtype": "tcp", 00:24:58.346 "traddr": "127.0.0.1", 00:24:58.346 "trsvcid": "4420" 00:24:58.346 }, 00:24:58.346 "method": "nvmf_subsystem_add_listener", 00:24:58.346 "req_id": 1 00:24:58.346 } 00:24:58.346 Got JSON-RPC error response 00:24:58.346 response: 00:24:58.346 { 00:24:58.346 "code": -32602, 00:24:58.346 
"message": "Invalid parameters" 00:24:58.346 } 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:58.346 02:41:45 keyring_file -- keyring/file.sh@46 -- # bperfpid=2432250 00:24:58.346 02:41:45 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2432250 /var/tmp/bperf.sock 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2432250 ']' 00:24:58.346 02:41:45 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:58.346 02:41:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:58.346 [2024-05-15 02:41:45.557541] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 00:24:58.346 [2024-05-15 02:41:45.557636] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432250 ] 00:24:58.346 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.346 [2024-05-15 02:41:45.632115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.346 [2024-05-15 02:41:45.750362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.282 02:41:46 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:59.282 02:41:46 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:59.282 02:41:46 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThMhSCQ5PG 00:24:59.282 02:41:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ThMhSCQ5PG 00:24:59.540 02:41:46 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1XSnP6uafc 00:24:59.540 02:41:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1XSnP6uafc 00:24:59.798 02:41:46 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:59.798 02:41:46 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:59.798 02:41:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.798 02:41:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.798 02:41:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:25:00.058 02:41:47 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ThMhSCQ5PG == \/\t\m\p\/\t\m\p\.\T\h\M\h\S\C\Q\5\P\G ]] 00:25:00.058 02:41:47 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:25:00.058 02:41:47 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:00.058 02:41:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.058 02:41:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.058 02:41:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:00.319 02:41:47 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.1XSnP6uafc == \/\t\m\p\/\t\m\p\.\1\X\S\n\P\6\u\a\f\c ]] 00:25:00.319 02:41:47 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:25:00.319 02:41:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:00.319 02:41:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.319 02:41:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.319 02:41:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:00.319 02:41:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.579 02:41:47 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:25:00.579 02:41:47 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:25:00.579 02:41:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:00.579 02:41:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.579 02:41:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.579 02:41:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.579 02:41:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:00.839 02:41:47 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:00.839 02:41:47 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.839 02:41:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.839 [2024-05-15 02:41:48.213775] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:01.098 nvme0n1 00:25:01.098 02:41:48 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:25:01.098 02:41:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:01.098 02:41:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.098 02:41:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.098 02:41:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.098 02:41:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:01.355 02:41:48 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:25:01.355 02:41:48 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:25:01.355 02:41:48 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:01.355 02:41:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.355 02:41:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.355 02:41:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:01.355 02:41:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.613 02:41:48 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:25:01.613 02:41:48 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:01.613 Running I/O for 1 seconds... 00:25:02.548 00:25:02.548 Latency(us) 00:25:02.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.548 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:02.548 nvme0n1 : 1.02 3600.76 14.07 0.00 0.00 35183.51 4369.07 79614.10 00:25:02.548 =================================================================================================================== 00:25:02.548 Total : 3600.76 14.07 0.00 0.00 35183.51 4369.07 79614.10 00:25:02.548 0 00:25:02.548 02:41:49 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:02.548 02:41:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:03.112 02:41:50 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.112 02:41:50 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:25:03.112 02:41:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.112 02:41:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.369 02:41:50 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:03.369 02:41:50 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.369 02:41:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:03.369 02:41:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.369 02:41:50 keyring_file -- common/autotest_common.sh@636 -- # 
local arg=bperf_cmd 00:25:03.369 02:41:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:03.369 02:41:50 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:03.369 02:41:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:03.369 02:41:50 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.369 02:41:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.627 [2024-05-15 02:41:50.980638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1654f30 (107): Transport endpoint is not connected 00:25:03.627 [2024-05-15 02:41:50.980669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:03.627 [2024-05-15 02:41:50.981625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1654f30 (9): Bad file descriptor 00:25:03.627 [2024-05-15 02:41:50.982623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:03.627 [2024-05-15 02:41:50.982648] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:03.627 [2024-05-15 02:41:50.982664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
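[Note on the failure just logged] Attaching with --psk key1 is a deliberate negative test: key1 presumably does not match the PSK the target side was configured with, so the connection is dropped ("Transport endpoint is not connected") and bdev_nvme_attach_controller returns the JSON-RPC error shown next. The surrounding NOT wrapper from autotest_common.sh inverts that outcome so the expected failure counts as a pass. A minimal sketch of the pattern, with NOT() simplified from the real helper (whose extra argument validation and es > 128 signal check are visible in the trace):

  NOT() {                          # simplified stand-in for autotest_common.sh's NOT()
      local es=0
      "$@" || es=$?                # run the wrapped command, remember its exit status
      (( es != 0 ))                # succeed only when the wrapped command failed
  }

  # the wrong-PSK attach is expected to fail, so the test wraps it:
  NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

The refcount checks that follow then confirm that the failed attach did not leak a reference on either registered key.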
00:25:03.627 request: 00:25:03.627 { 00:25:03.627 "name": "nvme0", 00:25:03.627 "trtype": "tcp", 00:25:03.627 "traddr": "127.0.0.1", 00:25:03.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:03.627 "adrfam": "ipv4", 00:25:03.627 "trsvcid": "4420", 00:25:03.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:03.627 "psk": "key1", 00:25:03.627 "method": "bdev_nvme_attach_controller", 00:25:03.627 "req_id": 1 00:25:03.627 } 00:25:03.627 Got JSON-RPC error response 00:25:03.627 response: 00:25:03.627 { 00:25:03.627 "code": -32602, 00:25:03.627 "message": "Invalid parameters" 00:25:03.627 } 00:25:03.627 02:41:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:03.627 02:41:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:03.627 02:41:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:03.627 02:41:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:03.627 02:41:51 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:25:03.627 02:41:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:03.627 02:41:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.627 02:41:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.627 02:41:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.627 02:41:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.884 02:41:51 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:25:03.884 02:41:51 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:25:03.884 02:41:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:03.884 02:41:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.884 02:41:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.884 02:41:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.884 02:41:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.142 02:41:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:04.142 02:41:51 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:25:04.142 02:41:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:04.399 02:41:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:25:04.399 02:41:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:04.657 02:41:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:25:04.657 02:41:51 keyring_file -- keyring/file.sh@77 -- # jq length 00:25:04.657 02:41:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.915 02:41:52 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:25:04.915 02:41:52 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ThMhSCQ5PG 00:25:04.915 02:41:52 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThMhSCQ5PG 00:25:04.915 02:41:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:04.915 02:41:52 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThMhSCQ5PG 00:25:04.915 02:41:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:04.915 02:41:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:04.915 02:41:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:04.915 02:41:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:04.915 02:41:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThMhSCQ5PG 00:25:04.915 02:41:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ThMhSCQ5PG 00:25:05.201 [2024-05-15 02:41:52.472358] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ThMhSCQ5PG': 0100660 00:25:05.201 [2024-05-15 02:41:52.472400] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:05.201 request: 00:25:05.201 { 00:25:05.201 "name": "key0", 00:25:05.201 "path": "/tmp/tmp.ThMhSCQ5PG", 00:25:05.201 "method": "keyring_file_add_key", 00:25:05.201 "req_id": 1 00:25:05.201 } 00:25:05.201 Got JSON-RPC error response 00:25:05.201 response: 00:25:05.201 { 00:25:05.201 "code": -1, 00:25:05.201 "message": "Operation not permitted" 00:25:05.201 } 00:25:05.201 02:41:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:05.201 02:41:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:05.201 02:41:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:05.201 02:41:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:05.201 02:41:52 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ThMhSCQ5PG 00:25:05.201 02:41:52 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThMhSCQ5PG 00:25:05.201 02:41:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ThMhSCQ5PG 00:25:05.461 02:41:52 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ThMhSCQ5PG 00:25:05.461 02:41:52 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:25:05.461 02:41:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:05.461 02:41:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:05.461 02:41:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.461 02:41:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.461 02:41:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:05.718 02:41:52 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:25:05.718 02:41:52 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.718 02:41:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:05.718 02:41:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.718 02:41:52 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:05.718 02:41:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.718 02:41:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:05.718 02:41:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.718 02:41:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.718 02:41:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.976 [2024-05-15 02:41:53.218378] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ThMhSCQ5PG': No such file or directory 00:25:05.976 [2024-05-15 02:41:53.218412] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:05.976 [2024-05-15 02:41:53.218440] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:05.976 [2024-05-15 02:41:53.218451] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:05.976 [2024-05-15 02:41:53.218463] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:05.976 request: 00:25:05.976 { 00:25:05.976 "name": "nvme0", 00:25:05.976 "trtype": "tcp", 00:25:05.976 "traddr": "127.0.0.1", 00:25:05.976 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:05.976 "adrfam": "ipv4", 00:25:05.976 "trsvcid": "4420", 00:25:05.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.976 "psk": "key0", 00:25:05.976 "method": "bdev_nvme_attach_controller", 00:25:05.976 "req_id": 1 00:25:05.976 } 00:25:05.976 Got JSON-RPC error response 00:25:05.976 response: 00:25:05.976 { 00:25:05.976 "code": -19, 00:25:05.976 "message": "No such device" 00:25:05.976 } 00:25:05.976 02:41:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:05.976 02:41:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:05.976 02:41:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:05.976 02:41:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:05.976 02:41:53 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:25:05.976 02:41:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:06.234 02:41:53 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vxDnfbc5QC 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:06.234 02:41:53 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:06.234 02:41:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:06.234 02:41:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:06.234 02:41:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:06.234 02:41:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:06.234 02:41:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vxDnfbc5QC 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vxDnfbc5QC 00:25:06.234 02:41:53 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.vxDnfbc5QC 00:25:06.234 02:41:53 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vxDnfbc5QC 00:25:06.234 02:41:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vxDnfbc5QC 00:25:06.492 02:41:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.492 02:41:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.750 nvme0n1 00:25:06.750 02:41:54 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:25:06.750 02:41:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.750 02:41:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.750 02:41:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.750 02:41:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.750 02:41:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.007 02:41:54 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:25:07.007 02:41:54 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:25:07.007 02:41:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:07.265 02:41:54 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:25:07.265 02:41:54 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:25:07.265 02:41:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.265 02:41:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.265 02:41:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.522 02:41:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:25:07.522 02:41:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:25:07.522 02:41:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:07.522 02:41:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.522 02:41:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.522 02:41:54 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.522 02:41:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.779 02:41:55 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:25:07.779 02:41:55 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:07.779 02:41:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:08.037 02:41:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:25:08.037 02:41:55 keyring_file -- keyring/file.sh@104 -- # jq length 00:25:08.037 02:41:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.295 02:41:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:25:08.295 02:41:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vxDnfbc5QC 00:25:08.295 02:41:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vxDnfbc5QC 00:25:08.553 02:41:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1XSnP6uafc 00:25:08.553 02:41:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1XSnP6uafc 00:25:08.810 02:41:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.810 02:41:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:09.068 nvme0n1 00:25:09.068 02:41:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:25:09.068 02:41:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:09.327 02:41:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:25:09.327 "subsystems": [ 00:25:09.327 { 00:25:09.327 "subsystem": "keyring", 00:25:09.327 "config": [ 00:25:09.327 { 00:25:09.327 "method": "keyring_file_add_key", 00:25:09.327 "params": { 00:25:09.327 "name": "key0", 00:25:09.327 "path": "/tmp/tmp.vxDnfbc5QC" 00:25:09.327 } 00:25:09.327 }, 00:25:09.327 { 00:25:09.327 "method": "keyring_file_add_key", 00:25:09.327 "params": { 00:25:09.327 "name": "key1", 00:25:09.327 "path": "/tmp/tmp.1XSnP6uafc" 00:25:09.327 } 00:25:09.327 } 00:25:09.327 ] 00:25:09.327 }, 00:25:09.327 { 00:25:09.327 "subsystem": "iobuf", 00:25:09.327 "config": [ 00:25:09.327 { 00:25:09.327 "method": "iobuf_set_options", 00:25:09.327 "params": { 00:25:09.327 "small_pool_count": 8192, 00:25:09.327 "large_pool_count": 1024, 00:25:09.327 "small_bufsize": 8192, 00:25:09.327 "large_bufsize": 135168 00:25:09.327 } 00:25:09.327 } 00:25:09.327 ] 00:25:09.327 }, 00:25:09.327 { 00:25:09.327 "subsystem": "sock", 00:25:09.327 "config": [ 00:25:09.327 { 00:25:09.327 "method": "sock_impl_set_options", 00:25:09.327 "params": { 00:25:09.327 
"impl_name": "posix", 00:25:09.327 "recv_buf_size": 2097152, 00:25:09.327 "send_buf_size": 2097152, 00:25:09.327 "enable_recv_pipe": true, 00:25:09.327 "enable_quickack": false, 00:25:09.327 "enable_placement_id": 0, 00:25:09.327 "enable_zerocopy_send_server": true, 00:25:09.328 "enable_zerocopy_send_client": false, 00:25:09.328 "zerocopy_threshold": 0, 00:25:09.328 "tls_version": 0, 00:25:09.328 "enable_ktls": false 00:25:09.328 } 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "method": "sock_impl_set_options", 00:25:09.328 "params": { 00:25:09.328 "impl_name": "ssl", 00:25:09.328 "recv_buf_size": 4096, 00:25:09.328 "send_buf_size": 4096, 00:25:09.328 "enable_recv_pipe": true, 00:25:09.328 "enable_quickack": false, 00:25:09.328 "enable_placement_id": 0, 00:25:09.328 "enable_zerocopy_send_server": true, 00:25:09.328 "enable_zerocopy_send_client": false, 00:25:09.328 "zerocopy_threshold": 0, 00:25:09.328 "tls_version": 0, 00:25:09.328 "enable_ktls": false 00:25:09.328 } 00:25:09.328 } 00:25:09.328 ] 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "subsystem": "vmd", 00:25:09.328 "config": [] 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "subsystem": "accel", 00:25:09.328 "config": [ 00:25:09.328 { 00:25:09.328 "method": "accel_set_options", 00:25:09.328 "params": { 00:25:09.328 "small_cache_size": 128, 00:25:09.328 "large_cache_size": 16, 00:25:09.328 "task_count": 2048, 00:25:09.328 "sequence_count": 2048, 00:25:09.328 "buf_count": 2048 00:25:09.328 } 00:25:09.328 } 00:25:09.328 ] 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "subsystem": "bdev", 00:25:09.328 "config": [ 00:25:09.328 { 00:25:09.328 "method": "bdev_set_options", 00:25:09.328 "params": { 00:25:09.328 "bdev_io_pool_size": 65535, 00:25:09.328 "bdev_io_cache_size": 256, 00:25:09.328 "bdev_auto_examine": true, 00:25:09.328 "iobuf_small_cache_size": 128, 00:25:09.328 "iobuf_large_cache_size": 16 00:25:09.328 } 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "method": "bdev_raid_set_options", 00:25:09.328 "params": { 00:25:09.328 "process_window_size_kb": 1024 00:25:09.328 } 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "method": "bdev_iscsi_set_options", 00:25:09.328 "params": { 00:25:09.328 "timeout_sec": 30 00:25:09.328 } 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "method": "bdev_nvme_set_options", 00:25:09.328 "params": { 00:25:09.328 "action_on_timeout": "none", 00:25:09.328 "timeout_us": 0, 00:25:09.328 "timeout_admin_us": 0, 00:25:09.328 "keep_alive_timeout_ms": 10000, 00:25:09.328 "arbitration_burst": 0, 00:25:09.328 "low_priority_weight": 0, 00:25:09.328 "medium_priority_weight": 0, 00:25:09.328 "high_priority_weight": 0, 00:25:09.328 "nvme_adminq_poll_period_us": 10000, 00:25:09.328 "nvme_ioq_poll_period_us": 0, 00:25:09.328 "io_queue_requests": 512, 00:25:09.328 "delay_cmd_submit": true, 00:25:09.328 "transport_retry_count": 4, 00:25:09.328 "bdev_retry_count": 3, 00:25:09.328 "transport_ack_timeout": 0, 00:25:09.328 "ctrlr_loss_timeout_sec": 0, 00:25:09.328 "reconnect_delay_sec": 0, 00:25:09.328 "fast_io_fail_timeout_sec": 0, 00:25:09.328 "disable_auto_failback": false, 00:25:09.328 "generate_uuids": false, 00:25:09.328 "transport_tos": 0, 00:25:09.328 "nvme_error_stat": false, 00:25:09.328 "rdma_srq_size": 0, 00:25:09.328 "io_path_stat": false, 00:25:09.328 "allow_accel_sequence": false, 00:25:09.328 "rdma_max_cq_size": 0, 00:25:09.328 "rdma_cm_event_timeout_ms": 0, 00:25:09.328 "dhchap_digests": [ 00:25:09.328 "sha256", 00:25:09.328 "sha384", 00:25:09.328 "sha512" 00:25:09.328 ], 00:25:09.328 "dhchap_dhgroups": [ 00:25:09.328 "null", 
00:25:09.328 "ffdhe2048", 00:25:09.328 "ffdhe3072", 00:25:09.328 "ffdhe4096", 00:25:09.328 "ffdhe6144", 00:25:09.328 "ffdhe8192" 00:25:09.328 ] 00:25:09.328 } 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "method": "bdev_nvme_attach_controller", 00:25:09.328 "params": { 00:25:09.328 "name": "nvme0", 00:25:09.328 "trtype": "TCP", 00:25:09.328 "adrfam": "IPv4", 00:25:09.328 "traddr": "127.0.0.1", 00:25:09.328 "trsvcid": "4420", 00:25:09.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.328 "prchk_reftag": false, 00:25:09.328 "prchk_guard": false, 00:25:09.328 "ctrlr_loss_timeout_sec": 0, 00:25:09.328 "reconnect_delay_sec": 0, 00:25:09.328 "fast_io_fail_timeout_sec": 0, 00:25:09.328 "psk": "key0", 00:25:09.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:09.328 "hdgst": false, 00:25:09.328 "ddgst": false 00:25:09.328 } 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "method": "bdev_nvme_set_hotplug", 00:25:09.328 "params": { 00:25:09.328 "period_us": 100000, 00:25:09.328 "enable": false 00:25:09.328 } 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "method": "bdev_wait_for_examine" 00:25:09.328 } 00:25:09.328 ] 00:25:09.328 }, 00:25:09.328 { 00:25:09.328 "subsystem": "nbd", 00:25:09.328 "config": [] 00:25:09.328 } 00:25:09.328 ] 00:25:09.328 }' 00:25:09.328 02:41:56 keyring_file -- keyring/file.sh@114 -- # killprocess 2432250 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2432250 ']' 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2432250 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@951 -- # uname 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2432250 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2432250' 00:25:09.328 killing process with pid 2432250 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@965 -- # kill 2432250 00:25:09.328 Received shutdown signal, test time was about 1.000000 seconds 00:25:09.328 00:25:09.328 Latency(us) 00:25:09.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.328 =================================================================================================================== 00:25:09.328 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.328 02:41:56 keyring_file -- common/autotest_common.sh@970 -- # wait 2432250 00:25:09.587 02:41:56 keyring_file -- keyring/file.sh@117 -- # bperfpid=2433717 00:25:09.587 02:41:56 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2433717 /var/tmp/bperf.sock 00:25:09.587 02:41:56 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2433717 ']' 00:25:09.587 02:41:56 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.587 02:41:56 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:09.587 02:41:56 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:09.587 02:41:56 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:25:09.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.587 02:41:56 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:25:09.587 "subsystems": [ 00:25:09.587 { 00:25:09.587 "subsystem": "keyring", 00:25:09.587 "config": [ 00:25:09.587 { 00:25:09.587 "method": "keyring_file_add_key", 00:25:09.587 "params": { 00:25:09.587 "name": "key0", 00:25:09.587 "path": "/tmp/tmp.vxDnfbc5QC" 00:25:09.587 } 00:25:09.587 }, 00:25:09.587 { 00:25:09.587 "method": "keyring_file_add_key", 00:25:09.587 "params": { 00:25:09.587 "name": "key1", 00:25:09.587 "path": "/tmp/tmp.1XSnP6uafc" 00:25:09.587 } 00:25:09.587 } 00:25:09.587 ] 00:25:09.587 }, 00:25:09.587 { 00:25:09.587 "subsystem": "iobuf", 00:25:09.587 "config": [ 00:25:09.587 { 00:25:09.587 "method": "iobuf_set_options", 00:25:09.587 "params": { 00:25:09.587 "small_pool_count": 8192, 00:25:09.587 "large_pool_count": 1024, 00:25:09.587 "small_bufsize": 8192, 00:25:09.587 "large_bufsize": 135168 00:25:09.587 } 00:25:09.587 } 00:25:09.587 ] 00:25:09.587 }, 00:25:09.587 { 00:25:09.587 "subsystem": "sock", 00:25:09.587 "config": [ 00:25:09.587 { 00:25:09.587 "method": "sock_impl_set_options", 00:25:09.587 "params": { 00:25:09.587 "impl_name": "posix", 00:25:09.587 "recv_buf_size": 2097152, 00:25:09.587 "send_buf_size": 2097152, 00:25:09.587 "enable_recv_pipe": true, 00:25:09.587 "enable_quickack": false, 00:25:09.587 "enable_placement_id": 0, 00:25:09.587 "enable_zerocopy_send_server": true, 00:25:09.587 "enable_zerocopy_send_client": false, 00:25:09.587 "zerocopy_threshold": 0, 00:25:09.587 "tls_version": 0, 00:25:09.587 "enable_ktls": false 00:25:09.587 } 00:25:09.587 }, 00:25:09.587 { 00:25:09.587 "method": "sock_impl_set_options", 00:25:09.587 "params": { 00:25:09.587 "impl_name": "ssl", 00:25:09.587 "recv_buf_size": 4096, 00:25:09.587 "send_buf_size": 4096, 00:25:09.587 "enable_recv_pipe": true, 00:25:09.588 "enable_quickack": false, 00:25:09.588 "enable_placement_id": 0, 00:25:09.588 "enable_zerocopy_send_server": true, 00:25:09.588 "enable_zerocopy_send_client": false, 00:25:09.588 "zerocopy_threshold": 0, 00:25:09.588 "tls_version": 0, 00:25:09.588 "enable_ktls": false 00:25:09.588 } 00:25:09.588 } 00:25:09.588 ] 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "subsystem": "vmd", 00:25:09.588 "config": [] 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "subsystem": "accel", 00:25:09.588 "config": [ 00:25:09.588 { 00:25:09.588 "method": "accel_set_options", 00:25:09.588 "params": { 00:25:09.588 "small_cache_size": 128, 00:25:09.588 "large_cache_size": 16, 00:25:09.588 "task_count": 2048, 00:25:09.588 "sequence_count": 2048, 00:25:09.588 "buf_count": 2048 00:25:09.588 } 00:25:09.588 } 00:25:09.588 ] 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "subsystem": "bdev", 00:25:09.588 "config": [ 00:25:09.588 { 00:25:09.588 "method": "bdev_set_options", 00:25:09.588 "params": { 00:25:09.588 "bdev_io_pool_size": 65535, 00:25:09.588 "bdev_io_cache_size": 256, 00:25:09.588 "bdev_auto_examine": true, 00:25:09.588 "iobuf_small_cache_size": 128, 00:25:09.588 "iobuf_large_cache_size": 16 00:25:09.588 } 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "method": "bdev_raid_set_options", 00:25:09.588 "params": { 00:25:09.588 "process_window_size_kb": 1024 00:25:09.588 } 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "method": "bdev_iscsi_set_options", 00:25:09.588 "params": { 00:25:09.588 "timeout_sec": 30 00:25:09.588 } 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "method": "bdev_nvme_set_options", 
00:25:09.588 "params": { 00:25:09.588 "action_on_timeout": "none", 00:25:09.588 "timeout_us": 0, 00:25:09.588 "timeout_admin_us": 0, 00:25:09.588 "keep_alive_timeout_ms": 10000, 00:25:09.588 "arbitration_burst": 0, 00:25:09.588 "low_priority_weight": 0, 00:25:09.588 "medium_priority_weight": 0, 00:25:09.588 "high_priority_weight": 0, 00:25:09.588 "nvme_adminq_poll_period_us": 10000, 00:25:09.588 "nvme_ioq_poll_period_us": 0, 00:25:09.588 "io_queue_requests": 512, 00:25:09.588 "delay_cmd_submit": true, 00:25:09.588 "transport_retry_count": 4, 00:25:09.588 "bdev_retry_count": 3, 00:25:09.588 "transport_ack_timeout": 0, 00:25:09.588 "ctrlr_loss_timeout_sec": 0, 00:25:09.588 "reconnect_delay_sec": 0, 00:25:09.588 "fast_io_fail_timeout_sec": 0, 00:25:09.588 "disable_auto_failback": false, 00:25:09.588 "generate_uuids": false, 00:25:09.588 "transport_tos": 0, 00:25:09.588 "nvme_error_stat": false, 00:25:09.588 "rdma_srq_size": 0, 00:25:09.588 "io_path_stat": false, 00:25:09.588 "allow_accel_sequence": false, 00:25:09.588 "rdma_max_cq_size": 0, 00:25:09.588 "rdma_cm_event_timeout_ms": 0, 00:25:09.588 "dhchap_digests": [ 00:25:09.588 "sha256", 00:25:09.588 "sha384", 00:25:09.588 "sha512" 00:25:09.588 ], 00:25:09.588 "dhchap_dhgroups": [ 00:25:09.588 "null", 00:25:09.588 "ffdhe2048", 00:25:09.588 "ffdhe3072", 00:25:09.588 "ffdhe4096", 00:25:09.588 "ffdhe6144", 00:25:09.588 "ffdhe8192" 00:25:09.588 ] 00:25:09.588 } 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "method": "bdev_nvme_attach_controller", 00:25:09.588 "params": { 00:25:09.588 "name": "nvme0", 00:25:09.588 "trtype": "TCP", 00:25:09.588 "adrfam": "IPv4", 00:25:09.588 "traddr": "127.0.0.1", 00:25:09.588 "trsvcid": "4420", 00:25:09.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.588 "prchk_reftag": false, 00:25:09.588 "prchk_guard": false, 00:25:09.588 "ctrlr_loss_timeout_sec": 0, 00:25:09.588 "reconnect_delay_sec": 0, 00:25:09.588 "fast_io_fail_timeout_sec": 0, 00:25:09.588 "psk": "key0", 00:25:09.588 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:09.588 "hdgst": false, 00:25:09.588 "ddgst": false 00:25:09.588 } 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "method": "bdev_nvme_set_hotplug", 00:25:09.588 "params": { 00:25:09.588 "period_us": 100000, 00:25:09.588 "enable": false 00:25:09.588 } 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "method": "bdev_wait_for_examine" 00:25:09.588 } 00:25:09.588 ] 00:25:09.588 }, 00:25:09.588 { 00:25:09.588 "subsystem": "nbd", 00:25:09.588 "config": [] 00:25:09.588 } 00:25:09.588 ] 00:25:09.588 }' 00:25:09.588 02:41:56 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:09.588 02:41:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:09.848 [2024-05-15 02:41:57.035353] Starting SPDK v24.05-pre git sha1 0ed7af446 / DPDK 23.11.0 initialization... 
00:25:09.848 [2024-05-15 02:41:57.035421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433717 ] 00:25:09.848 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.848 [2024-05-15 02:41:57.107049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.848 [2024-05-15 02:41:57.222616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.108 [2024-05-15 02:41:57.408782] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.674 02:41:57 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:10.674 02:41:57 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:25:10.674 02:41:57 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:25:10.674 02:41:57 keyring_file -- keyring/file.sh@120 -- # jq length 00:25:10.674 02:41:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.931 02:41:58 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:25:10.931 02:41:58 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:25:10.931 02:41:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:10.931 02:41:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:10.931 02:41:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:10.931 02:41:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.932 02:41:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:11.190 02:41:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:11.190 02:41:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:25:11.190 02:41:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:11.190 02:41:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:11.190 02:41:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.190 02:41:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.190 02:41:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:11.448 02:41:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:25:11.448 02:41:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:25:11.448 02:41:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:25:11.448 02:41:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:11.707 02:41:58 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:25:11.707 02:41:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:11.707 02:41:58 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.vxDnfbc5QC /tmp/tmp.1XSnP6uafc 00:25:11.707 02:41:58 keyring_file -- keyring/file.sh@20 -- # killprocess 2433717 00:25:11.707 02:41:58 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2433717 ']' 00:25:11.707 02:41:58 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2433717 00:25:11.707 02:41:58 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:25:11.707 02:41:58 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:11.707 02:41:58 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2433717 00:25:11.707 02:41:59 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:11.707 02:41:59 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:11.707 02:41:59 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2433717' 00:25:11.707 killing process with pid 2433717 00:25:11.707 02:41:59 keyring_file -- common/autotest_common.sh@965 -- # kill 2433717 00:25:11.707 Received shutdown signal, test time was about 1.000000 seconds 00:25:11.707 00:25:11.707 Latency(us) 00:25:11.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.708 =================================================================================================================== 00:25:11.708 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:11.708 02:41:59 keyring_file -- common/autotest_common.sh@970 -- # wait 2433717 00:25:11.968 02:41:59 keyring_file -- keyring/file.sh@21 -- # killprocess 2432112 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2432112 ']' 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2432112 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@951 -- # uname 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2432112 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2432112' 00:25:11.968 killing process with pid 2432112 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@965 -- # kill 2432112 00:25:11.968 [2024-05-15 02:41:59.307183] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:11.968 [2024-05-15 02:41:59.307256] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:11.968 02:41:59 keyring_file -- common/autotest_common.sh@970 -- # wait 2432112 00:25:12.535 00:25:12.535 real 0m15.437s 00:25:12.535 user 0m37.296s 00:25:12.535 sys 0m3.325s 00:25:12.535 02:41:59 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:12.535 02:41:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:12.535 ************************************ 00:25:12.535 END TEST keyring_file 00:25:12.535 ************************************ 00:25:12.536 02:41:59 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:25:12.536 02:41:59 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:12.536 
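Both shutdowns above (the bperf instance at pid 2433717, then the target at pid 2432112) go through the killprocess() helper from autotest_common.sh. Only its xtrace is visible here, so the following is a simplified reconstruction of the Linux path that this run exercises, not the verbatim function; the '[' Linux = Linux ']' check implies at least one other branch that never fires here.

    # Simplified reconstruction of killprocess() as traced at
    # autotest_common.sh@946-@970 above; illustrative, not the exact source.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                        # target already exited
        process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_1" / "reactor_0" above
        [ "$process_name" = sudo ] && return 1            # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"                                       # SIGTERM; bdevperf answers with the
        wait "$pid" || true                               # "Received shutdown signal" summary
    }

The empty Latency tables printed after each SIGTERM are bdevperf's normal shutdown summary: these keyring-only steps issued no I/O, so every counter is zero, and the 18446744073709551616.00 minimum is simply the never-updated minimum counter (an all-ones 64-bit value rendered as a double).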
02:41:59 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:12.536 02:41:59 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:25:12.536 02:41:59 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:12.536 02:41:59 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:12.536 02:41:59 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:12.536 02:41:59 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:25:12.536 02:41:59 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:25:12.536 02:41:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:12.536 02:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:12.536 02:41:59 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:25:12.536 02:41:59 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:25:12.536 02:41:59 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:25:12.536 02:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:14.439 INFO: APP EXITING 00:25:14.439 INFO: killing all VMs 00:25:14.439 INFO: killing vhost app 00:25:14.439 INFO: EXIT DONE 00:25:15.374 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:25:15.374 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:25:15.374 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:25:15.374 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:25:15.633 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:25:15.633 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:25:15.633 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:25:15.633 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:25:15.633 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:25:15.633 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:25:15.633 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:25:15.633 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:25:15.633 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:25:15.633 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:25:15.633 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:25:15.633 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:25:15.633 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:25:17.009 Cleaning 00:25:17.009 Removing: /var/run/dpdk/spdk0/config 00:25:17.009 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:17.009 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:17.009 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:17.009 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:17.009 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:17.009 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:17.009 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:17.009 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:17.009 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:17.009 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:17.009 Removing: /var/run/dpdk/spdk1/config 00:25:17.009 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:17.009 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:17.009 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:17.009 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:17.009 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:17.009 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:17.009 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:17.009 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:17.009 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:17.009 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:17.009 Removing: /var/run/dpdk/spdk1/mp_socket 00:25:17.009 Removing: /var/run/dpdk/spdk2/config 00:25:17.009 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:17.009 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:17.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:17.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:17.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:17.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:17.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:17.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:17.269 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:17.269 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:17.269 Removing: /var/run/dpdk/spdk3/config 00:25:17.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:17.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:17.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:17.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:17.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:17.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:17.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:17.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:17.269 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:17.269 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:17.269 Removing: /var/run/dpdk/spdk4/config 00:25:17.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:17.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:17.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:17.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:17.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:17.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:17.270 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:17.270 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:17.270 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:17.270 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:17.270 Removing: /dev/shm/bdev_svc_trace.1 00:25:17.270 Removing: /dev/shm/nvmf_trace.0 00:25:17.270 Removing: /dev/shm/spdk_tgt_trace.pid2176335 00:25:17.270 Removing: /var/run/dpdk/spdk0 00:25:17.270 Removing: /var/run/dpdk/spdk1 00:25:17.270 Removing: /var/run/dpdk/spdk2 00:25:17.270 Removing: /var/run/dpdk/spdk3 00:25:17.270 Removing: /var/run/dpdk/spdk4 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2174661 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2175413 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2176335 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2176784 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2177474 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2177743 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2178456 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2178592 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2178842 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2180036 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2181078 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2181416 
00:25:17.270 Removing: /var/run/dpdk/spdk_pid2181748 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2182035 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2182361 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2182516 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2183035 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2183374 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2183938 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2186285 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2186452 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2186747 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2186753 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2187182 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2187195 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2187615 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2187627 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2187841 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2187934 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2188185 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2188240 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2188725 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2188885 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2189080 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2189379 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2189401 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2189591 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2189744 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2189941 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2190184 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2190337 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2190567 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2190775 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2190935 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2191205 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2191370 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2191521 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2191801 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2191957 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2192116 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2192387 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2192550 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2192717 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2192988 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2193146 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2193565 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2193835 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2194023 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2194241 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2196747 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2226503 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2229542 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2236812 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2240529 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2243306 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2243810 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2251793 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2251796 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2252457 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2252997 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2253655 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2254061 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2254064 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2254404 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2254452 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2254573 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2255365 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2256289 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2256948 
00:25:17.270 Removing: /var/run/dpdk/spdk_pid2257351 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2257353 00:25:17.270 Removing: /var/run/dpdk/spdk_pid2257616 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2258512 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2259348 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2265011 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2265294 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2268210 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2272457 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2276384 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2283631 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2289783 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2291086 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2292260 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2303446 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2306086 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2309281 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2310461 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2311718 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2311807 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2311942 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2312082 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2312527 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2313842 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2314707 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2315138 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2316848 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2317320 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2317880 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2320701 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2327807 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2330590 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2334902 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2335849 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2337067 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2339909 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2342700 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2347747 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2347749 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2350929 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2351069 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2351203 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2351596 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2351601 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2354384 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2354834 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2357670 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2359648 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2363356 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2367230 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2374386 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2379235 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2379237 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2392405 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2392943 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2393353 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2393835 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2394603 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2395034 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2395546 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2395972 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2398881 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2399139 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2403407 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2403981 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2405751 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2411220 
00:25:17.529 Removing: /var/run/dpdk/spdk_pid2411277 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2414632 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2416034 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2417437 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2418182 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2419591 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2420468 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2426439 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2426752 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2427146 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2428793 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2429075 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2429476 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2432112 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2432250 00:25:17.529 Removing: /var/run/dpdk/spdk_pid2433717 00:25:17.529 Clean 00:25:17.529 02:42:04 -- common/autotest_common.sh@1447 -- # return 0 00:25:17.529 02:42:04 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:25:17.529 02:42:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.529 02:42:04 -- common/autotest_common.sh@10 -- # set +x 00:25:17.529 02:42:04 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:25:17.529 02:42:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.529 02:42:04 -- common/autotest_common.sh@10 -- # set +x 00:25:17.788 02:42:04 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:25:17.788 02:42:04 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:25:17.788 02:42:04 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:25:17.788 02:42:04 -- spdk/autotest.sh@387 -- # hash lcov 00:25:17.788 02:42:04 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:17.788 02:42:04 -- spdk/autotest.sh@389 -- # hostname 00:25:17.788 02:42:04 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:25:17.788 geninfo: WARNING: invalid characters removed from testname! 
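The lcov invocations traced immediately above and below all repeat the same long set of --rc switches; condensed (repeated flags and absolute paths trimmed for readability), the coverage post-processing in this epilogue amounts to the following sketch.

    # Condensed view of the coverage steps from spdk/autotest.sh@389-@396;
    # the repeated --rc lcov_*/genhtml_* options and --no-external are omitted.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=../output
    lcov -q -c -d . -t spdk-gp-11 -o "$out/cov_test.info"         # capture counters from this run
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
         -o "$out/cov_total.info"                                  # merge with the pre-test baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" \
             -o "$out/cov_total.info"                              # strip out-of-tree and helper sources
    done
    rm -f "$out/cov_base.info" "$out/cov_test.info"                # keep only the merged cov_total.info
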
00:25:49.891 02:42:32 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:49.891 02:42:36 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:52.428 02:42:39 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:55.719 02:42:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:58.255 02:42:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:26:01.547 02:42:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:26:04.085 02:42:51 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:04.085 02:42:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.085 02:42:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:04.085 02:42:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.085 02:42:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.085 02:42:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.085 02:42:51 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.085 02:42:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.085 02:42:51 -- paths/export.sh@5 -- $ export PATH 00:26:04.085 02:42:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.085 02:42:51 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:26:04.085 02:42:51 -- common/autobuild_common.sh@437 -- $ date +%s 00:26:04.085 02:42:51 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715733771.XXXXXX 00:26:04.085 02:42:51 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715733771.yR3Sko 00:26:04.085 02:42:51 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:26:04.085 02:42:51 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:26:04.085 02:42:51 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:26:04.085 02:42:51 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:26:04.085 02:42:51 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:26:04.085 02:42:51 -- common/autobuild_common.sh@453 -- $ get_config_params 00:26:04.085 02:42:51 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:26:04.085 02:42:51 -- common/autotest_common.sh@10 -- $ set +x 00:26:04.085 02:42:51 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:26:04.085 02:42:51 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:26:04.085 02:42:51 -- pm/common@17 -- $ local monitor 00:26:04.085 02:42:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:04.085 02:42:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:04.085 02:42:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:04.085 02:42:51 -- pm/common@21 -- $ date +%s 00:26:04.085 02:42:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:04.085 02:42:51 -- pm/common@21 -- $ date +%s 00:26:04.085 
02:42:51 -- pm/common@25 -- $ sleep 1 00:26:04.085 02:42:51 -- pm/common@21 -- $ date +%s 00:26:04.085 02:42:51 -- pm/common@21 -- $ date +%s 00:26:04.085 02:42:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715733771 00:26:04.085 02:42:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715733771 00:26:04.085 02:42:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715733771 00:26:04.085 02:42:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715733771 00:26:04.085 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715733771_collect-vmstat.pm.log 00:26:04.085 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715733771_collect-cpu-load.pm.log 00:26:04.085 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715733771_collect-cpu-temp.pm.log 00:26:04.085 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715733771_collect-bmc-pm.bmc.pm.log 00:26:05.047 02:42:52 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:26:05.047 02:42:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:26:05.047 02:42:52 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:05.047 02:42:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:26:05.047 02:42:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:26:05.047 02:42:52 -- spdk/autopackage.sh@19 -- $ timing_finish 00:26:05.047 02:42:52 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:05.047 02:42:52 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:26:05.047 02:42:52 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:26:05.047 02:42:52 -- spdk/autopackage.sh@20 -- $ exit 0 00:26:05.047 02:42:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:05.047 02:42:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:05.047 02:42:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:05.047 02:42:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:05.047 02:42:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:26:05.047 02:42:52 -- pm/common@44 -- $ pid=2443513 00:26:05.047 02:42:52 -- pm/common@50 -- $ kill -TERM 2443513 00:26:05.047 02:42:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:05.047 02:42:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:26:05.047 02:42:52 -- pm/common@44 -- $ pid=2443515 00:26:05.047 02:42:52 -- pm/common@50 -- $ kill 
-TERM 2443515 00:26:05.047 02:42:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:05.047 02:42:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:26:05.047 02:42:52 -- pm/common@44 -- $ pid=2443517 00:26:05.047 02:42:52 -- pm/common@50 -- $ kill -TERM 2443517 00:26:05.047 02:42:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:05.047 02:42:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:26:05.047 02:42:52 -- pm/common@44 -- $ pid=2443551 00:26:05.047 02:42:52 -- pm/common@50 -- $ sudo -E kill -TERM 2443551 00:26:05.047 + [[ -n 2088849 ]] 00:26:05.047 + sudo kill 2088849 00:26:05.060 [Pipeline] } 00:26:05.081 [Pipeline] // stage 00:26:05.086 [Pipeline] } 00:26:05.096 [Pipeline] // timeout 00:26:05.100 [Pipeline] } 00:26:05.110 [Pipeline] // catchError 00:26:05.119 [Pipeline] } 00:26:05.135 [Pipeline] // wrap 00:26:05.141 [Pipeline] } 00:26:05.155 [Pipeline] // catchError 00:26:05.163 [Pipeline] stage 00:26:05.164 [Pipeline] { (Epilogue) 00:26:05.177 [Pipeline] catchError 00:26:05.178 [Pipeline] { 00:26:05.192 [Pipeline] echo 00:26:05.193 Cleanup processes 00:26:05.198 [Pipeline] sh 00:26:05.483 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:05.483 2443673 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:26:05.483 2443782 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:05.497 [Pipeline] sh 00:26:05.782 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:05.782 ++ grep -v 'sudo pgrep' 00:26:05.782 ++ awk '{print $1}' 00:26:05.782 + sudo kill -9 2443673 00:26:05.793 [Pipeline] sh 00:26:06.077 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:14.216 [Pipeline] sh 00:26:14.503 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:14.503 Artifacts sizes are good 00:26:14.520 [Pipeline] archiveArtifacts 00:26:14.527 Archiving artifacts 00:26:14.741 [Pipeline] sh 00:26:15.026 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:26:15.042 [Pipeline] cleanWs 00:26:15.052 [WS-CLEANUP] Deleting project workspace... 00:26:15.052 [WS-CLEANUP] Deferred wipeout is used... 00:26:15.059 [WS-CLEANUP] done 00:26:15.061 [Pipeline] } 00:26:15.081 [Pipeline] // catchError 00:26:15.093 [Pipeline] sh 00:26:15.374 + logger -p user.info -t JENKINS-CI 00:26:15.383 [Pipeline] } 00:26:15.399 [Pipeline] // stage 00:26:15.405 [Pipeline] } 00:26:15.422 [Pipeline] // node 00:26:15.428 [Pipeline] End of Pipeline 00:26:15.467 Finished: SUCCESS